Jan 13 20:53:08.968336 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:53:08.968378 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:53:08.968395 kernel: BIOS-provided physical RAM map:
Jan 13 20:53:08.968407 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:53:08.968419 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:53:08.968431 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:53:08.968448 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 20:53:08.970525 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 20:53:08.970542 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 20:53:08.970556 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:53:08.970568 kernel: NX (Execute Disable) protection: active
Jan 13 20:53:08.970582 kernel: APIC: Static calls initialized
Jan 13 20:53:08.970594 kernel: SMBIOS 2.7 present.
Jan 13 20:53:08.970608 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 20:53:08.970630 kernel: Hypervisor detected: KVM
Jan 13 20:53:08.970644 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:53:08.970659 kernel: kvm-clock: using sched offset of 8479385973 cycles
Jan 13 20:53:08.970674 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:53:08.970689 kernel: tsc: Detected 2499.996 MHz processor
Jan 13 20:53:08.970704 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:53:08.970720 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:53:08.970738 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 20:53:08.970753 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:53:08.970768 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:53:08.970782 kernel: Using GB pages for direct mapping
Jan 13 20:53:08.970797 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:53:08.970811 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 20:53:08.970825 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 20:53:08.970840 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:53:08.970855 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 20:53:08.970872 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 20:53:08.970887 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:53:08.970901 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:53:08.970916 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 20:53:08.970930 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:53:08.970944 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 20:53:08.970959 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 20:53:08.970973 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:53:08.970987 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 20:53:08.971005 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 20:53:08.971026 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 20:53:08.971041 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 20:53:08.971057 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 20:53:08.971073 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 20:53:08.971091 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 20:53:08.971106 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 20:53:08.971130 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 20:53:08.971145 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 20:53:08.971162 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:53:08.971177 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:53:08.971192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 20:53:08.971208 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 20:53:08.971223 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 20:53:08.971242 kernel: Zone ranges:
Jan 13 20:53:08.971258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:53:08.971273 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 20:53:08.971288 kernel: Normal empty
Jan 13 20:53:08.971304 kernel: Movable zone start for each node
Jan 13 20:53:08.971319 kernel: Early memory node ranges
Jan 13 20:53:08.971334 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:53:08.971350 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 20:53:08.971365 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 20:53:08.971381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:53:08.971399 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:53:08.971415 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 20:53:08.971430 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:53:08.971446 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:53:08.972390 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 20:53:08.972408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:53:08.972423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:53:08.972439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:53:08.972467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:53:08.972494 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:53:08.972507 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:53:08.972520 kernel: TSC deadline timer available
Jan 13 20:53:08.972533 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:53:08.972546 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:53:08.972558 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 20:53:08.972570 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:53:08.972584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:53:08.972596 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:53:08.972612 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:53:08.972625 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:53:08.972637 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:53:08.972650 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:53:08.972663 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:53:08.972677 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:53:08.972690 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:53:08.972702 kernel: random: crng init done
Jan 13 20:53:08.972718 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:53:08.972731 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:53:08.972743 kernel: Fallback order for Node 0: 0
Jan 13 20:53:08.972755 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 20:53:08.972768 kernel: Policy zone: DMA32
Jan 13 20:53:08.972781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:53:08.972794 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Jan 13 20:53:08.972806 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:53:08.972819 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:53:08.972834 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:53:08.972847 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:53:08.972860 kernel: Dynamic Preempt: voluntary
Jan 13 20:53:08.972873 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:53:08.972887 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:53:08.972900 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:53:08.972913 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:53:08.972925 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:53:08.972938 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:53:08.972954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:53:08.972966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:53:08.972979 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:53:08.972991 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:53:08.973004 kernel: Console: colour VGA+ 80x25
Jan 13 20:53:08.973017 kernel: printk: console [ttyS0] enabled
Jan 13 20:53:08.973029 kernel: ACPI: Core revision 20230628
Jan 13 20:53:08.973042 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 20:53:08.973054 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:53:08.973070 kernel: x2apic enabled
Jan 13 20:53:08.973082 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:53:08.973105 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:53:08.973121 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 13 20:53:08.973135 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:53:08.973148 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:53:08.973161 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:53:08.973174 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:53:08.973187 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:53:08.973200 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:53:08.973214 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 20:53:08.973227 kernel: RETBleed: Vulnerable
Jan 13 20:53:08.973243 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:53:08.973256 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:53:08.973269 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:53:08.973282 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:53:08.973295 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:53:08.973308 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:53:08.973321 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:53:08.973337 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:53:08.973350 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:53:08.973364 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:53:08.973377 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:53:08.973390 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:53:08.973403 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:53:08.973416 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:53:08.973429 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:53:08.973443 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:53:08.974862 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 20:53:08.974886 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 20:53:08.974908 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 20:53:08.975013 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 20:53:08.975031 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 20:53:08.975048 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:53:08.975063 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:53:08.975079 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:53:08.975095 kernel: landlock: Up and running.
Jan 13 20:53:08.975448 kernel: SELinux: Initializing.
Jan 13 20:53:08.975522 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:53:08.975538 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:53:08.975596 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 20:53:08.975620 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:53:08.975637 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:53:08.975653 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:53:08.975669 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:53:08.975684 kernel: signal: max sigframe size: 3632
Jan 13 20:53:08.975700 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:53:08.975717 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:53:08.975732 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:53:08.975748 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:53:08.975767 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:53:08.975782 kernel: .... node #0, CPUs: #1
Jan 13 20:53:08.975799 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:53:08.975815 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:53:08.975867 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:53:08.975883 kernel: smpboot: Max logical packages: 1
Jan 13 20:53:08.975899 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 13 20:53:08.975914 kernel: devtmpfs: initialized
Jan 13 20:53:08.975933 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:53:08.975949 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:53:08.975993 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:53:08.976009 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:53:08.976025 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:53:08.976040 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:53:08.976056 kernel: audit: type=2000 audit(1736801588.850:1): state=initialized audit_enabled=0 res=1
Jan 13 20:53:08.976071 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:53:08.976087 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:53:08.976106 kernel: cpuidle: using governor menu
Jan 13 20:53:08.976122 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:53:08.976137 kernel: dca service started, version 1.12.1
Jan 13 20:53:08.976152 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:53:08.976168 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:53:08.976183 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:53:08.976199 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:53:08.976214 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:53:08.976230 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:53:08.976434 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:53:08.976605 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:53:08.976621 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:53:08.976637 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:53:08.976652 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:53:08.976668 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:53:08.976683 kernel: ACPI: Interpreter enabled
Jan 13 20:53:08.976699 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:53:08.976714 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:53:08.976730 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:53:08.976751 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:53:08.976766 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:53:08.976782 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:53:08.977153 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:53:08.977406 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:53:08.978613 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:53:08.978640 kernel: acpiphp: Slot [3] registered
Jan 13 20:53:08.978662 kernel: acpiphp: Slot [4] registered
Jan 13 20:53:08.978678 kernel: acpiphp: Slot [5] registered
Jan 13 20:53:08.978694 kernel: acpiphp: Slot [6] registered
Jan 13 20:53:08.978709 kernel: acpiphp: Slot [7] registered
Jan 13 20:53:08.978724 kernel: acpiphp: Slot [8] registered
Jan 13 20:53:08.978740 kernel: acpiphp: Slot [9] registered
Jan 13 20:53:08.978755 kernel: acpiphp: Slot [10] registered
Jan 13 20:53:08.978770 kernel: acpiphp: Slot [11] registered
Jan 13 20:53:08.978785 kernel: acpiphp: Slot [12] registered
Jan 13 20:53:08.978804 kernel: acpiphp: Slot [13] registered
Jan 13 20:53:08.978820 kernel: acpiphp: Slot [14] registered
Jan 13 20:53:08.978835 kernel: acpiphp: Slot [15] registered
Jan 13 20:53:08.978850 kernel: acpiphp: Slot [16] registered
Jan 13 20:53:08.978864 kernel: acpiphp: Slot [17] registered
Jan 13 20:53:08.978877 kernel: acpiphp: Slot [18] registered
Jan 13 20:53:08.978892 kernel: acpiphp: Slot [19] registered
Jan 13 20:53:08.978907 kernel: acpiphp: Slot [20] registered
Jan 13 20:53:08.978922 kernel: acpiphp: Slot [21] registered
Jan 13 20:53:08.978937 kernel: acpiphp: Slot [22] registered
Jan 13 20:53:08.978956 kernel: acpiphp: Slot [23] registered
Jan 13 20:53:08.978971 kernel: acpiphp: Slot [24] registered
Jan 13 20:53:08.978987 kernel: acpiphp: Slot [25] registered
Jan 13 20:53:08.979002 kernel: acpiphp: Slot [26] registered
Jan 13 20:53:08.979018 kernel: acpiphp: Slot [27] registered
Jan 13 20:53:08.979034 kernel: acpiphp: Slot [28] registered
Jan 13 20:53:08.979048 kernel: acpiphp: Slot [29] registered
Jan 13 20:53:08.979064 kernel: acpiphp: Slot [30] registered
Jan 13 20:53:08.979079 kernel: acpiphp: Slot [31] registered
Jan 13 20:53:08.979098 kernel: PCI host bridge to bus 0000:00
Jan 13 20:53:08.979252 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:53:08.979370 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:53:08.980916 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:53:08.981671 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 20:53:08.981965 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:53:08.983524 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:53:08.983833 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:53:08.984019 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 20:53:08.984643 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:53:08.984850 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 20:53:08.984977 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 20:53:08.985099 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 20:53:08.985219 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 20:53:08.985351 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 20:53:08.987510 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 20:53:08.987736 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 20:53:08.987885 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 20:53:08.988020 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 20:53:08.988225 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 20:53:08.988361 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:53:08.988563 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:53:08.988766 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 20:53:08.988993 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:53:08.989134 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 20:53:08.989155 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:53:08.989172 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:53:08.989195 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:53:08.989211 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:53:08.989227 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:53:08.989244 kernel: iommu: Default domain type: Translated
Jan 13 20:53:08.989260 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:53:08.989275 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:53:08.989364 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:53:08.989398 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:53:08.989430 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 20:53:08.990688 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 20:53:08.990834 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 20:53:08.990965 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:53:08.990985 kernel: vgaarb: loaded
Jan 13 20:53:08.991001 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 20:53:08.991018 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 20:53:08.991033 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:53:08.991049 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:53:08.991065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:53:08.991086 kernel: pnp: PnP ACPI init
Jan 13 20:53:08.991101 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:53:08.991125 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:53:08.991141 kernel: NET: Registered PF_INET protocol family
Jan 13 20:53:08.991157 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:53:08.991189 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:53:08.991206 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:53:08.991243 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:53:08.991261 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:53:08.991278 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:53:08.991293 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:53:08.991311 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:53:08.991327 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:53:08.991342 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:53:08.993609 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:53:08.993757 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:53:08.993881 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:53:08.994008 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 20:53:08.994223 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:53:08.994247 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:53:08.994265 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:53:08.994282 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:53:08.994297 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:53:08.994314 kernel: Initialise system trusted keyrings
Jan 13 20:53:08.994330 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 20:53:08.994351 kernel: Key type asymmetric registered
Jan 13 20:53:08.994367 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:53:08.994383 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:53:08.994400 kernel: io scheduler mq-deadline registered
Jan 13 20:53:08.995364 kernel: io scheduler kyber registered
Jan 13 20:53:08.995386 kernel: io scheduler bfq registered
Jan 13 20:53:08.995403 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:53:08.995421 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:53:08.995438 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:53:08.997492 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:53:08.997517 kernel: i8042: Warning: Keylock active
Jan 13 20:53:08.997534 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:53:08.997551 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:53:08.997732 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:53:08.997868 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:53:08.997996 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:53:08 UTC (1736801588)
Jan 13 20:53:08.998121 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:53:08.998147 kernel: intel_pstate: CPU model not supported
Jan 13 20:53:08.998164 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:53:08.998181 kernel: Segment Routing with IPv6
Jan 13 20:53:08.998197 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:53:08.998213 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:53:08.998229 kernel: Key type dns_resolver registered
Jan 13 20:53:08.998246 kernel: IPI shorthand broadcast: enabled
Jan 13 20:53:08.998330 kernel: sched_clock: Marking stable (541003418, 208640171)->(865242323, -115598734)
Jan 13 20:53:08.998348 kernel: registered taskstats version 1
Jan 13 20:53:08.998370 kernel: Loading compiled-in X.509 certificates
Jan 13 20:53:08.998386 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:53:08.998402 kernel: Key type .fscrypt registered
Jan 13 20:53:08.998418 kernel: Key type fscrypt-provisioning registered
Jan 13 20:53:08.998435 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:53:08.999863 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:53:08.999880 kernel: ima: No architecture policies found
Jan 13 20:53:08.999897 kernel: clk: Disabling unused clocks
Jan 13 20:53:08.999919 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:53:08.999935 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:53:08.999951 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:53:08.999967 kernel: Run /init as init process
Jan 13 20:53:08.999984 kernel: with arguments:
Jan 13 20:53:08.999999 kernel: /init
Jan 13 20:53:09.000013 kernel: with environment:
Jan 13 20:53:09.000029 kernel: HOME=/
Jan 13 20:53:09.000046 kernel: TERM=linux
Jan 13 20:53:09.000073 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:53:09.000108 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:53:09.000230 systemd[1]: Detected virtualization amazon.
Jan 13 20:53:09.000248 systemd[1]: Detected architecture x86-64.
Jan 13 20:53:09.000294 systemd[1]: Running in initrd.
Jan 13 20:53:09.000316 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:53:09.000334 systemd[1]: Hostname set to .
Jan 13 20:53:09.000416 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:53:09.000837 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:53:09.000910 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:53:09.000957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:53:09.000957 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:53:09.000973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:53:09.001014 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:53:09.001056 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:53:09.001074 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:53:09.001114 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:53:09.001159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:53:09.001177 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:53:09.001194 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:53:09.002558 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:53:09.002583 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:53:09.002602 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:53:09.002622 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:53:09.002640 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:53:09.002658 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:53:09.002677 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:53:09.002696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:53:09.002714 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:53:09.002736 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:53:09.002754 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:53:09.002773 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:53:09.002791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:53:09.002809 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:53:09.002827 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:53:09.002845 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:53:09.002868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:53:09.002886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:53:09.002939 systemd-journald[179]: Collecting audit messages is disabled.
Jan 13 20:53:09.002983 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:53:09.003002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:53:09.003021 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:53:09.003040 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:53:09.003066 systemd-journald[179]: Journal started
Jan 13 20:53:09.003111 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2b185be02a992517f03f98eadf41ba) is 4.8M, max 38.6M, 33.7M free.
Jan 13 20:53:08.989704 systemd-modules-load[180]: Inserted module 'overlay'
Jan 13 20:53:09.010650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:53:09.019161 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:53:09.038535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:53:09.037506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:53:09.163563 kernel: Bridge firewalling registered
Jan 13 20:53:09.041179 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 13 20:53:09.170054 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:53:09.172979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:53:09.177567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:53:09.192786 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:53:09.197418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:53:09.210996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:53:09.212593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:53:09.231618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:53:09.239753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:53:09.245247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:53:09.247319 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:53:09.260488 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:53:09.287310 dracut-cmdline[215]: dracut-dracut-053
Jan 13 20:53:09.294300 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:53:09.315070 systemd-resolved[212]: Positive Trust Anchors:
Jan 13 20:53:09.315092 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:53:09.315198 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:53:09.319773 systemd-resolved[212]: Defaulting to hostname 'linux'.
Jan 13 20:53:09.321829 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:53:09.322047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:53:09.446429 kernel: SCSI subsystem initialized
Jan 13 20:53:09.469632 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:53:09.488483 kernel: iscsi: registered transport (tcp)
Jan 13 20:53:09.544691 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:53:09.544779 kernel: QLogic iSCSI HBA Driver
Jan 13 20:53:09.600900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:53:09.607316 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:53:09.679851 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:53:09.679937 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:53:09.679960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:53:09.722496 kernel: raid6: avx512x4 gen() 14485 MB/s
Jan 13 20:53:09.739498 kernel: raid6: avx512x2 gen() 15237 MB/s
Jan 13 20:53:09.756488 kernel: raid6: avx512x1 gen() 15092 MB/s
Jan 13 20:53:09.773490 kernel: raid6: avx2x4 gen() 14796 MB/s
Jan 13 20:53:09.791020 kernel: raid6: avx2x2 gen() 15487 MB/s
Jan 13 20:53:09.807682 kernel: raid6: avx2x1 gen() 9991 MB/s
Jan 13 20:53:09.807793 kernel: raid6: using algorithm avx2x2 gen() 15487 MB/s
Jan 13 20:53:09.825476 kernel: raid6: .... xor() 15928 MB/s, rmw enabled
Jan 13 20:53:09.825567 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 20:53:09.848482 kernel: xor: automatically using best checksumming function avx
Jan 13 20:53:10.014486 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:53:10.025376 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:53:10.032640 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:53:10.058769 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 13 20:53:10.073114 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:53:10.096991 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:53:10.134435 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jan 13 20:53:10.186850 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:53:10.191927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:53:10.274328 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:53:10.284152 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:53:10.333130 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:53:10.343891 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:53:10.345679 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:53:10.348372 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:53:10.364879 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:53:10.391225 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:53:10.408978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:53:10.427064 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:53:10.427135 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:53:10.470650 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:53:10.470859 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:53:10.470882 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 20:53:10.471044 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:95:d7:34:c0:d1
Jan 13 20:53:10.472989 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:53:10.489772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:53:10.491441 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:53:10.495285 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:53:10.497606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:53:10.497821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:53:10.499683 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:53:10.518947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:53:10.524788 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:53:10.525018 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:53:10.537482 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:53:10.544475 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:53:10.544541 kernel: GPT:9289727 != 16777215
Jan 13 20:53:10.544560 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:53:10.544579 kernel: GPT:9289727 != 16777215
Jan 13 20:53:10.544596 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:53:10.545573 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:53:10.634482 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (447)
Jan 13 20:53:10.635475 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Jan 13 20:53:10.743037 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:53:10.745112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:53:10.753665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:53:10.768982 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:53:10.772821 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:53:10.793115 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:53:10.805128 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:53:10.813639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:53:10.822161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:53:10.833319 disk-uuid[631]: Primary Header is updated.
Jan 13 20:53:10.833319 disk-uuid[631]: Secondary Entries is updated.
Jan 13 20:53:10.833319 disk-uuid[631]: Secondary Header is updated.
Jan 13 20:53:10.838511 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:53:11.853222 disk-uuid[632]: The operation has completed successfully.
Jan 13 20:53:11.856716 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:53:12.043694 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:53:12.043890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:53:12.065741 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:53:12.083839 sh[892]: Success
Jan 13 20:53:12.106659 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:53:12.252103 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:53:12.266766 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:53:12.284968 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:53:12.318556 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 20:53:12.318632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:53:12.321407 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:53:12.321478 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:53:12.322517 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:53:12.340483 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:53:12.356609 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:53:12.357439 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:53:12.365758 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:53:12.370865 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:53:12.401371 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:53:12.401512 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:53:12.401665 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:53:12.407551 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:53:12.424754 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:53:12.423872 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:53:12.433107 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:53:12.443783 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:53:12.552102 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:53:12.560089 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:53:12.617163 systemd-networkd[1085]: lo: Link UP
Jan 13 20:53:12.617176 systemd-networkd[1085]: lo: Gained carrier
Jan 13 20:53:12.620103 systemd-networkd[1085]: Enumeration completed
Jan 13 20:53:12.620283 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:53:12.620716 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:53:12.620721 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:53:12.631826 systemd-networkd[1085]: eth0: Link UP
Jan 13 20:53:12.631832 systemd-networkd[1085]: eth0: Gained carrier
Jan 13 20:53:12.631848 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:53:12.633295 systemd[1]: Reached target network.target - Network.
Jan 13 20:53:12.656448 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.29.104/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:53:12.706504 ignition[1005]: Ignition 2.20.0
Jan 13 20:53:12.706518 ignition[1005]: Stage: fetch-offline
Jan 13 20:53:12.706747 ignition[1005]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:53:12.706758 ignition[1005]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:53:12.709504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:53:12.707066 ignition[1005]: Ignition finished successfully
Jan 13 20:53:12.720647 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:53:12.746366 ignition[1096]: Ignition 2.20.0
Jan 13 20:53:12.746378 ignition[1096]: Stage: fetch
Jan 13 20:53:12.746807 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:53:12.746817 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:53:12.746909 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:53:12.764697 ignition[1096]: PUT result: OK
Jan 13 20:53:12.767159 ignition[1096]: parsed url from cmdline: ""
Jan 13 20:53:12.767168 ignition[1096]: no config URL provided
Jan 13 20:53:12.767175 ignition[1096]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:53:12.767186 ignition[1096]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:53:12.767203 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:53:12.768197 ignition[1096]: PUT result: OK
Jan 13 20:53:12.768232 ignition[1096]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:53:12.774961 ignition[1096]: GET result: OK
Jan 13 20:53:12.775065 ignition[1096]: parsing config with SHA512: 24f6d4006e046bb10b35d48951630b2475dd559f36a20bb9994e5d8a12c3eb73d76344eaf923bc524c04ad27efc62b9a77fa5070260616d0ea506d589ad2a43c
Jan 13 20:53:12.781292 unknown[1096]: fetched base config from "system"
Jan 13 20:53:12.781760 ignition[1096]: fetch: fetch complete
Jan 13 20:53:12.781310 unknown[1096]: fetched base config from "system"
Jan 13 20:53:12.781765 ignition[1096]: fetch: fetch passed
Jan 13 20:53:12.781318 unknown[1096]: fetched user config from "aws"
Jan 13 20:53:12.781808 ignition[1096]: Ignition finished successfully
Jan 13 20:53:12.788308 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:53:12.799716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:53:12.829833 ignition[1102]: Ignition 2.20.0
Jan 13 20:53:12.829847 ignition[1102]: Stage: kargs
Jan 13 20:53:12.831364 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:53:12.831530 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:53:12.831757 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:53:12.833286 ignition[1102]: PUT result: OK
Jan 13 20:53:12.842218 ignition[1102]: kargs: kargs passed
Jan 13 20:53:12.842331 ignition[1102]: Ignition finished successfully
Jan 13 20:53:12.845253 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:53:12.851643 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:53:12.866882 ignition[1108]: Ignition 2.20.0
Jan 13 20:53:12.866897 ignition[1108]: Stage: disks
Jan 13 20:53:12.867398 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:53:12.867411 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:53:12.867547 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:53:12.868815 ignition[1108]: PUT result: OK
Jan 13 20:53:12.874440 ignition[1108]: disks: disks passed
Jan 13 20:53:12.877765 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:53:12.874589 ignition[1108]: Ignition finished successfully
Jan 13 20:53:12.881622 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:53:12.883262 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:53:12.883320 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:53:12.883349 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:53:12.883373 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:53:12.893934 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:53:12.941740 systemd-fsck[1116]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:53:12.945204 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:53:12.950691 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:53:13.064483 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 20:53:13.064675 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:53:13.066361 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:53:13.078583 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:53:13.081419 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:53:13.085506 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:53:13.085914 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:53:13.085987 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:53:13.103854 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:53:13.107798 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:53:13.119480 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1135)
Jan 13 20:53:13.124703 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:53:13.124782 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:53:13.124813 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:53:13.130770 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:53:13.133957 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:53:13.207628 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:53:13.215343 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:53:13.224551 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:53:13.229696 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:53:13.388054 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:53:13.393747 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:53:13.404632 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:53:13.413921 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:53:13.415179 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:53:13.443020 ignition[1251]: INFO : Ignition 2.20.0
Jan 13 20:53:13.443020 ignition[1251]: INFO : Stage: mount
Jan 13 20:53:13.445344 ignition[1251]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:53:13.445344 ignition[1251]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:53:13.445344 ignition[1251]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:53:13.451438 ignition[1251]: INFO : PUT result: OK
Jan 13 20:53:13.451558 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:53:13.456240 ignition[1251]: INFO : mount: mount passed
Jan 13 20:53:13.457278 ignition[1251]: INFO : Ignition finished successfully
Jan 13 20:53:13.458246 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:53:13.464703 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:53:13.478767 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:53:13.500506 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1264)
Jan 13 20:53:13.502692 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:53:13.502748 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:53:13.502768 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:53:13.508541 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:53:13.511024 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:53:13.536537 ignition[1281]: INFO : Ignition 2.20.0
Jan 13 20:53:13.536537 ignition[1281]: INFO : Stage: files
Jan 13 20:53:13.538720 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:53:13.538720 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:53:13.538720 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:53:13.538720 ignition[1281]: INFO : PUT result: OK
Jan 13 20:53:13.544759 ignition[1281]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:53:13.546157 ignition[1281]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:53:13.546157 ignition[1281]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:53:13.551268 ignition[1281]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:53:13.553062 ignition[1281]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:53:13.563069 unknown[1281]: wrote ssh authorized keys file for user: core
Jan 13 20:53:13.567731 ignition[1281]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:53:13.571022 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:53:13.571022 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:53:13.576664 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:53:13.576664 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:53:13.576664 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:53:13.576664 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:53:13.576664 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:53:13.576664 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 20:53:14.029225 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:53:14.357899 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:53:14.360341 ignition[1281]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:53:14.362543 ignition[1281]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:53:14.362543 ignition[1281]: INFO : files: files passed
Jan 13 20:53:14.362543 ignition[1281]: INFO : Ignition finished successfully
Jan 13 20:53:14.368608 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:53:14.375968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:53:14.380205 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:53:14.407139 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:53:14.407732 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:53:14.418259 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:53:14.418259 initrd-setup-root-after-ignition[1310]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:53:14.425048 initrd-setup-root-after-ignition[1314]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:53:14.428657 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:53:14.434232 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:53:14.441670 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:53:14.487717 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:53:14.487848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:53:14.492488 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:53:14.495632 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:53:14.498278 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:53:14.504748 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:53:14.521957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:53:14.528652 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:53:14.543804 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:53:14.544040 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:53:14.548160 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:53:14.551377 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:53:14.552709 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:53:14.555285 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:53:14.556662 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:53:14.558645 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:53:14.562475 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:53:14.565554 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:53:14.565810 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:53:14.571173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:53:14.573869 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:53:14.575500 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:53:14.578896 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:53:14.580476 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:53:14.581562 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:53:14.583909 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:53:14.586441 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:53:14.590137 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:53:14.590217 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:53:14.596426 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:53:14.599014 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:53:14.603147 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:53:14.603321 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:53:14.610204 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:53:14.613164 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:53:14.626776 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:53:14.629253 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:53:14.631145 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:53:14.637299 systemd-networkd[1085]: eth0: Gained IPv6LL
Jan 13 20:53:14.658954 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:53:14.667063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:53:14.667396 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:53:14.669661 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:53:14.669820 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:53:14.680013 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:53:14.680201 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:53:14.704752 ignition[1334]: INFO : Ignition 2.20.0
Jan 13 20:53:14.704752 ignition[1334]: INFO : Stage: umount
Jan 13 20:53:14.704752 ignition[1334]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:53:14.704752 ignition[1334]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:53:14.710813 ignition[1334]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:53:14.710813 ignition[1334]: INFO : PUT result: OK
Jan 13 20:53:14.715316 ignition[1334]: INFO : umount: umount passed
Jan 13 20:53:14.716898 ignition[1334]: INFO : Ignition finished successfully
Jan 13 20:53:14.717639 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:53:14.717913 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:53:14.722187 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:53:14.723380 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:53:14.725987 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:53:14.726049 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:53:14.727469 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:53:14.727645 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:53:14.729623 systemd[1]: Stopped target network.target - Network.
Jan 13 20:53:14.732777 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:53:14.733070 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:53:14.738723 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:53:14.741170 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:53:14.745340 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:53:14.746755 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:53:14.749316 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:53:14.749535 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:53:14.749594 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:53:14.753540 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:53:14.753599 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:53:14.756530 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:53:14.756607 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:53:14.759988 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:53:14.761351 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:53:14.763624 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:53:14.765804 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:53:14.768936 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:53:14.780920 systemd-networkd[1085]: eth0: DHCPv6 lease lost
Jan 13 20:53:14.786125 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:53:14.786235 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:53:14.790368 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:53:14.791540 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:53:14.796055 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:53:14.796124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:53:14.805644 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:53:14.806587 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:53:14.806674 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:53:14.808176 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:53:14.808240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:53:14.811912 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:53:14.811964 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:53:14.815072 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:53:14.815125 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:53:14.819302 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:53:14.834956 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:53:14.835310 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:53:14.839400 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:53:14.839748 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:53:14.850398 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:53:14.850535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:53:14.851950 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:53:14.852022 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:53:14.857222 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:53:14.858253 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:53:14.864928 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:53:14.865023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:53:14.879852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:53:14.884004 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:53:14.884252 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:53:14.885619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:53:14.886112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:53:14.889738 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:53:14.890464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:53:14.892892 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:53:14.893004 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:53:14.900843 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:53:14.900944 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:53:14.914696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:53:14.914847 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:53:14.919394 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:53:14.928751 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:53:14.952927 systemd[1]: Switching root.
Jan 13 20:53:14.985782 systemd-journald[179]: Journal stopped
Jan 13 20:53:16.519352 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:53:16.520219 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:53:16.520266 kernel: SELinux: policy capability open_perms=1
Jan 13 20:53:16.520286 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:53:16.520305 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:53:16.520322 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:53:16.520340 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:53:16.520357 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:53:16.520374 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:53:16.520396 kernel: audit: type=1403 audit(1736801595.215:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:53:16.520423 systemd[1]: Successfully loaded SELinux policy in 43.055ms.
Jan 13 20:53:16.520560 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.334ms.
Jan 13 20:53:16.520586 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:53:16.520605 systemd[1]: Detected virtualization amazon.
Jan 13 20:53:16.520625 systemd[1]: Detected architecture x86-64.
Jan 13 20:53:16.520642 systemd[1]: Detected first boot.
Jan 13 20:53:16.520660 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:53:16.520683 zram_generator::config[1377]: No configuration found.
Jan 13 20:53:16.520759 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:53:16.520783 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:53:16.520805 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:53:16.520827 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:53:16.520848 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:53:16.520867 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:53:16.520891 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:53:16.520909 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:53:16.520928 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:53:16.520946 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:53:16.520965 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:53:16.520987 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:53:16.521091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:53:16.521113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:53:16.521132 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:53:16.521154 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:53:16.521173 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:53:16.521192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:53:16.521211 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:53:16.521229 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:53:16.521249 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:53:16.521269 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:53:16.521337 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:53:16.521366 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:53:16.521470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:53:16.521492 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:53:16.521967 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:53:16.522001 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:53:16.522046 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:53:16.522066 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:53:16.522085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:53:16.522104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:53:16.522153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:53:16.522173 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:53:16.522215 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:53:16.522316 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:53:16.522338 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:53:16.522358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:16.522404 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:53:16.522423 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:53:16.524490 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:53:16.524520 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:53:16.524541 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:53:16.524563 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:53:16.524585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:53:16.524605 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:53:16.524626 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:53:16.524646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:53:16.524665 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:53:16.524690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:53:16.524709 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:53:16.524728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:53:16.524748 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:53:16.524768 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:53:16.524787 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:53:16.524805 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:53:16.524824 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:53:16.524846 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:53:16.524865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:53:16.524884 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:53:16.524902 kernel: fuse: init (API version 7.39)
Jan 13 20:53:16.524924 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:53:16.524943 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:53:16.524962 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:53:16.524981 systemd[1]: Stopped verity-setup.service.
Jan 13 20:53:16.525000 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:16.525023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:53:16.525042 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:53:16.525061 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:53:16.525079 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:53:16.525151 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:53:16.525177 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:53:16.525197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:53:16.525218 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:53:16.525238 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:53:16.525256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:53:16.525275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:53:16.525294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:53:16.525312 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:53:16.525332 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:53:16.525354 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:53:16.525372 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:53:16.525394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:53:16.525413 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:53:16.525432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:53:16.527022 kernel: loop: module loaded
Jan 13 20:53:16.527058 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:53:16.527088 kernel: ACPI: bus type drm_connector registered
Jan 13 20:53:16.527109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:53:16.527130 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:53:16.527152 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:53:16.527175 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:53:16.527197 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:53:16.527221 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:53:16.527284 systemd-journald[1459]: Collecting audit messages is disabled.
Jan 13 20:53:16.527329 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:53:16.527353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:53:16.527377 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:53:16.527400 systemd-journald[1459]: Journal started
Jan 13 20:53:16.527443 systemd-journald[1459]: Runtime Journal (/run/log/journal/ec2b185be02a992517f03f98eadf41ba) is 4.8M, max 38.6M, 33.7M free.
Jan 13 20:53:15.963989 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:53:15.986432 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 20:53:15.986899 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:53:16.538531 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:53:16.568164 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:53:16.568421 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:53:16.572155 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:53:16.580865 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:53:16.589811 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:53:16.591517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:53:16.598738 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:53:16.602631 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:53:16.603964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:53:16.606645 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:53:16.608028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:53:16.616799 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:53:16.631172 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:53:16.635606 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:53:16.639605 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:53:16.654758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:53:16.663794 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:53:16.676516 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 20:53:16.677956 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:53:16.680063 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:53:16.693883 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:53:16.704543 udevadm[1512]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:53:16.710335 systemd-journald[1459]: Time spent on flushing to /var/log/journal/ec2b185be02a992517f03f98eadf41ba is 90.112ms for 949 entries.
Jan 13 20:53:16.710335 systemd-journald[1459]: System Journal (/var/log/journal/ec2b185be02a992517f03f98eadf41ba) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:53:16.807834 systemd-journald[1459]: Received client request to flush runtime journal.
Jan 13 20:53:16.807896 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:53:16.821966 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:53:16.829606 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:53:16.840989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:53:16.842588 kernel: loop1: detected capacity change from 0 to 62848
Jan 13 20:53:16.848802 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:53:16.849874 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:53:16.924344 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 13 20:53:16.924571 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 13 20:53:16.942251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:53:16.958624 kernel: loop2: detected capacity change from 0 to 211296
Jan 13 20:53:17.111584 kernel: loop3: detected capacity change from 0 to 140992
Jan 13 20:53:17.181495 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 20:53:17.237917 kernel: loop5: detected capacity change from 0 to 62848
Jan 13 20:53:17.256477 kernel: loop6: detected capacity change from 0 to 211296
Jan 13 20:53:17.305484 kernel: loop7: detected capacity change from 0 to 140992
Jan 13 20:53:17.356710 (sd-merge)[1528]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 20:53:17.358035 (sd-merge)[1528]: Merged extensions into '/usr'.
Jan 13 20:53:17.369791 systemd[1]: Reloading requested from client PID 1507 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:53:17.369974 systemd[1]: Reloading...
Jan 13 20:53:17.556486 zram_generator::config[1555]: No configuration found.
Jan 13 20:53:17.820483 ldconfig[1502]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:53:17.868450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:53:17.988073 systemd[1]: Reloading finished in 613 ms.
Jan 13 20:53:18.023583 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:53:18.025584 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:53:18.035671 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:53:18.038450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:53:18.066300 systemd[1]: Reloading requested from client PID 1603 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:53:18.066327 systemd[1]: Reloading...
Jan 13 20:53:18.109728 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:53:18.110344 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:53:18.112345 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:53:18.112775 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Jan 13 20:53:18.112852 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Jan 13 20:53:18.124259 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:53:18.124276 systemd-tmpfiles[1604]: Skipping /boot
Jan 13 20:53:18.166148 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:53:18.166305 systemd-tmpfiles[1604]: Skipping /boot
Jan 13 20:53:18.219488 zram_generator::config[1631]: No configuration found.
Jan 13 20:53:18.349935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:53:18.412231 systemd[1]: Reloading finished in 345 ms.
Jan 13 20:53:18.430427 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:53:18.436073 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:53:18.450736 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:53:18.469961 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:53:18.483824 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:53:18.501909 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:53:18.508632 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:53:18.521074 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:53:18.545560 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:18.545983 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:53:18.559770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:53:18.584849 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:53:18.589379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:53:18.593135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:53:18.593347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:18.610071 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:18.611513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:53:18.611890 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:53:18.621979 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:53:18.625728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:18.627175 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:53:18.642006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:53:18.642217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:53:18.651168 systemd-udevd[1692]: Using default interface naming scheme 'v255'.
Jan 13 20:53:18.657850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:53:18.658330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:53:18.662576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:18.665008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:53:18.672922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:53:18.682886 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:53:18.685706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:53:18.686590 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:53:18.688679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:53:18.691537 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:53:18.694013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:53:18.694610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:53:18.704739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:53:18.718048 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:53:18.721406 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:53:18.729215 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:53:18.732756 augenrules[1720]: No rules
Jan 13 20:53:18.734800 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:53:18.738181 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:53:18.738886 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:53:18.740949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:53:18.741926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:53:18.748131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:53:18.759763 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:53:18.762830 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:53:18.782296 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:53:18.793618 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:53:18.795708 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:53:18.800267 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:53:18.927668 (udev-worker)[1742]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:53:18.932617 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:53:18.951023 systemd-resolved[1689]: Positive Trust Anchors:
Jan 13 20:53:18.951778 systemd-resolved[1689]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:53:18.951922 systemd-resolved[1689]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:53:18.970810 systemd-resolved[1689]: Defaulting to hostname 'linux'.
Jan 13 20:53:18.972030 systemd-networkd[1737]: lo: Link UP
Jan 13 20:53:18.972352 systemd-networkd[1737]: lo: Gained carrier
Jan 13 20:53:18.975059 systemd-networkd[1737]: Enumeration completed
Jan 13 20:53:18.977623 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:53:18.987661 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:53:18.993760 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:53:18.998530 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:53:18.998536 systemd-networkd[1737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:53:19.000036 systemd[1]: Reached target network.target - Network.
Jan 13 20:53:19.003324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:53:19.018930 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:53:19.018986 systemd-networkd[1737]: eth0: Link UP
Jan 13 20:53:19.019316 systemd-networkd[1737]: eth0: Gained carrier
Jan 13 20:53:19.019336 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:53:19.034528 systemd-networkd[1737]: eth0: DHCPv4 address 172.31.29.104/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:53:19.081520 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 13 20:53:19.101876 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:53:19.111525 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:53:19.114004 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jan 13 20:53:19.114123 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:53:19.120484 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1735)
Jan 13 20:53:19.125480 kernel: ACPI: button: Sleep Button [SLPF]
Jan 13 20:53:19.239481 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:53:19.252619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:53:19.320867 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:53:19.326779 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:53:19.345343 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:53:19.358011 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:53:19.385523 lvm[1850]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:53:19.393387 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:53:19.414677 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:53:19.535264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:53:19.540733 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:53:19.543057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:53:19.545957 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:53:19.547292 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:53:19.548901 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:53:19.550380 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:53:19.551623 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:53:19.552980 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:53:19.554313 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:53:19.554345 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:53:19.555349 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:53:19.557178 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:53:19.561549 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:53:19.563715 lvm[1857]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:53:19.591912 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:53:19.597128 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:53:19.599126 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:53:19.600811 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:53:19.602205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:53:19.602244 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:53:19.613826 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:53:19.619225 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:53:19.623946 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:53:19.632619 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:53:19.652781 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:53:19.654177 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:53:19.654692 jq[1864]: false
Jan 13 20:53:19.670764 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:53:19.695582 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 20:53:19.703698 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 20:53:19.710757 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:53:19.714681 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:53:19.721250 extend-filesystems[1865]: Found loop4
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found loop5
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found loop6
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found loop7
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1p1
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1p2
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1p3
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found usr
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1p4
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1p6
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1p7
Jan 13 20:53:19.722489 extend-filesystems[1865]: Found nvme0n1p9
Jan 13 20:53:19.752714 extend-filesystems[1865]: Checking size of /dev/nvme0n1p9
Jan 13 20:53:19.736010 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:53:19.741799 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:53:19.743569 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:53:19.755307 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:53:19.783608 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:53:19.787526 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:53:19.804136 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:53:19.805017 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:53:19.805430 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:53:19.806226 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:53:19.821824 extend-filesystems[1865]: Resized partition /dev/nvme0n1p9
Jan 13 20:53:19.880912 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: ----------------------------------------------------
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: corporation. Support and training for ntp-4 are
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: available at https://www.nwtime.org/support
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: ----------------------------------------------------
Jan 13 20:53:19.898655 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: proto: precision = 0.069 usec (-24)
Jan 13 20:53:19.895151 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:53:19.899257 extend-filesystems[1900]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:53:19.880946 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:53:19.909734 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 13 20:53:19.906698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:53:19.912922 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: basedate set to 2025-01-01
Jan 13 20:53:19.912922 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:53:19.880957 ntpd[1867]: ----------------------------------------------------
Jan 13 20:53:19.913239 jq[1879]: true
Jan 13 20:53:19.906752 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:53:19.880967 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:53:19.909719 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:53:19.880976 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:53:19.909762 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:53:19.880986 ntpd[1867]: corporation. Support and training for ntp-4 are
Jan 13 20:53:19.880996 ntpd[1867]: available at https://www.nwtime.org/support
Jan 13 20:53:19.881005 ntpd[1867]: ----------------------------------------------------
Jan 13 20:53:19.894262 ntpd[1867]: proto: precision = 0.069 usec (-24)
Jan 13 20:53:19.894584 dbus-daemon[1863]: [system] SELinux support is enabled
Jan 13 20:53:19.900703 ntpd[1867]: basedate set to 2025-01-01
Jan 13 20:53:19.900727 ntpd[1867]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:53:19.925096 (ntainerd)[1897]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:53:19.928187 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:53:19.931636 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:53:19.931636 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:53:19.928262 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:53:19.937755 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:53:19.937813 ntpd[1867]: Listen normally on 3 eth0 172.31.29.104:123
Jan 13 20:53:19.937965 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:53:19.937965 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Listen normally on 3 eth0 172.31.29.104:123
Jan 13 20:53:19.937965 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Listen normally on 4 lo [::1]:123
Jan 13 20:53:19.937965 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: bind(21) AF_INET6 fe80::495:d7ff:fe34:c0d1%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:53:19.937965 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: unable to create socket on eth0 (5) for fe80::495:d7ff:fe34:c0d1%2#123
Jan 13 20:53:19.937965 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: failed to init interface for address fe80::495:d7ff:fe34:c0d1%2
Jan 13 20:53:19.937853 ntpd[1867]: Listen normally on 4 lo [::1]:123
Jan 13 20:53:19.942811 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: Listening on routing socket on fd #21 for interface updates
Jan 13 20:53:19.937905 ntpd[1867]: bind(21) AF_INET6 fe80::495:d7ff:fe34:c0d1%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:53:19.937927 ntpd[1867]: unable to create socket on eth0 (5) for fe80::495:d7ff:fe34:c0d1%2#123
Jan 13 20:53:19.937942 ntpd[1867]: failed to init interface for address fe80::495:d7ff:fe34:c0d1%2
Jan 13 20:53:19.937977 ntpd[1867]: Listening on routing socket on fd #21 for interface updates
Jan 13 20:53:19.944050 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1737 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 20:53:19.961260 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 20:53:19.981764 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:53:19.981809 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:53:19.981989 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:53:19.981989 ntpd[1867]: 13 Jan 20:53:19 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:53:19.983690 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:53:19.985029 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:53:20.008920 update_engine[1878]: I20250113 20:53:20.008827 1878 main.cc:92] Flatcar Update Engine starting
Jan 13 20:53:20.014053 update_engine[1878]: I20250113 20:53:20.013998 1878 update_check_scheduler.cc:74] Next update check in 5m46s
Jan 13 20:53:20.017164 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:53:20.023683 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:53:20.064554 jq[1901]: true
Jan 13 20:53:20.088622 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 20:53:20.103297 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 13 20:53:20.118710 extend-filesystems[1900]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 13 20:53:20.118710 extend-filesystems[1900]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:53:20.118710 extend-filesystems[1900]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 13 20:53:20.136864 extend-filesystems[1865]: Resized filesystem in /dev/nvme0n1p9
Jan 13 20:53:20.121826 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:53:20.145540 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1745)
Jan 13 20:53:20.122077 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:53:20.200621 systemd-networkd[1737]: eth0: Gained IPv6LL
Jan 13 20:53:20.211181 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:53:20.214669 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:53:20.218359 coreos-metadata[1862]: Jan 13 20:53:20.218 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:53:20.223703 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 13 20:53:20.251864 coreos-metadata[1862]: Jan 13 20:53:20.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 13 20:53:20.251864 coreos-metadata[1862]: Jan 13 20:53:20.250 INFO Fetch successful
Jan 13 20:53:20.251864 coreos-metadata[1862]: Jan 13 20:53:20.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 13 20:53:20.251864 coreos-metadata[1862]: Jan 13 20:53:20.251 INFO Fetch successful
Jan 13 20:53:20.251864 coreos-metadata[1862]: Jan 13 20:53:20.251 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.252 INFO Fetch successful
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.253 INFO Fetch successful
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.254 INFO Fetch failed with 404: resource not found
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.255 INFO Fetch successful
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.256 INFO Fetch successful
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.257 INFO Fetch successful
Jan 13 20:53:20.258010 coreos-metadata[1862]: Jan 13 20:53:20.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 13 20:53:20.261826 coreos-metadata[1862]: Jan 13 20:53:20.259 INFO Fetch successful
Jan 13 20:53:20.261826 coreos-metadata[1862]: Jan 13 20:53:20.259 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 13 20:53:20.261826 coreos-metadata[1862]: Jan 13 20:53:20.260 INFO Fetch successful
Jan 13 20:53:20.261586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:53:20.265262 systemd-logind[1874]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:53:20.265295 systemd-logind[1874]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 13 20:53:20.265320 systemd-logind[1874]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:53:20.272531 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:53:20.282127 systemd-logind[1874]: New seat seat0.
Jan 13 20:53:20.304477 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:53:20.393488 bash[1983]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:53:20.397359 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 20:53:20.398217 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1907 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 20:53:20.401115 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:53:20.422155 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 20:53:20.450884 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 20:53:20.463802 systemd[1]: Starting sshkeys.service...
Jan 13 20:53:20.603993 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:53:20.632315 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:53:20.657649 polkitd[1995]: Started polkitd version 121
Jan 13 20:53:20.661293 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:53:20.683650 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:53:20.708321 locksmithd[1918]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:53:20.718314 polkitd[1995]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 20:53:20.720672 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:53:20.719769 polkitd[1995]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 20:53:20.724514 polkitd[1995]: Finished loading, compiling and executing 2 rules
Jan 13 20:53:20.742992 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 20:53:20.744742 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 20:53:20.746439 polkitd[1995]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 20:53:20.816227 amazon-ssm-agent[1934]: Initializing new seelog logger
Jan 13 20:53:20.816227 amazon-ssm-agent[1934]: New Seelog Logger Creation Complete
Jan 13 20:53:20.820184 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.820184 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.820184 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 processing appconfig overrides
Jan 13 20:53:20.820184 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.820184 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.820184 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 processing appconfig overrides
Jan 13 20:53:20.828611 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.828611 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.828611 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 processing appconfig overrides
Jan 13 20:53:20.831491 amazon-ssm-agent[1934]: 2025-01-13 20:53:20 INFO Proxy environment variables:
Jan 13 20:53:20.837714 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.837714 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:53:20.837714 amazon-ssm-agent[1934]: 2025/01/13 20:53:20 processing appconfig overrides
Jan 13 20:53:20.850881 systemd-resolved[1689]: System hostname changed to 'ip-172-31-29-104'.
Jan 13 20:53:20.851444 systemd-hostnamed[1907]: Hostname set to (transient)
Jan 13 20:53:20.929815 amazon-ssm-agent[1934]: 2025-01-13 20:53:20 INFO https_proxy:
Jan 13 20:53:20.952021 coreos-metadata[2028]: Jan 13 20:53:20.951 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:53:20.954475 coreos-metadata[2028]: Jan 13 20:53:20.952 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 13 20:53:20.954475 coreos-metadata[2028]: Jan 13 20:53:20.954 INFO Fetch successful
Jan 13 20:53:20.955447 coreos-metadata[2028]: Jan 13 20:53:20.955 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 20:53:20.957164 coreos-metadata[2028]: Jan 13 20:53:20.956 INFO Fetch successful
Jan 13 20:53:20.961015 unknown[2028]: wrote ssh authorized keys file for user: core
Jan 13 20:53:21.031503 amazon-ssm-agent[1934]: 2025-01-13 20:53:20 INFO http_proxy:
Jan 13 20:53:21.046484 update-ssh-keys[2077]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:53:21.047648 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:53:21.064656 systemd[1]: Finished sshkeys.service.
Jan 13 20:53:21.134931 containerd[1897]: time="2025-01-13T20:53:21.134814792Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:53:21.135958 amazon-ssm-agent[1934]: 2025-01-13 20:53:20 INFO no_proxy:
Jan 13 20:53:21.235814 amazon-ssm-agent[1934]: 2025-01-13 20:53:20 INFO Checking if agent identity type OnPrem can be assumed
Jan 13 20:53:21.264356 containerd[1897]: time="2025-01-13T20:53:21.264273613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:53:21.268743 containerd[1897]: time="2025-01-13T20:53:21.268683058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:53:21.268743 containerd[1897]: time="2025-01-13T20:53:21.268739185Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:53:21.268879 containerd[1897]: time="2025-01-13T20:53:21.268763229Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:53:21.268970 containerd[1897]: time="2025-01-13T20:53:21.268949497Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:53:21.269011 containerd[1897]: time="2025-01-13T20:53:21.268981728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:53:21.269134 containerd[1897]: time="2025-01-13T20:53:21.269052875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:53:21.269187 containerd[1897]: time="2025-01-13T20:53:21.269132727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:53:21.270072 containerd[1897]: time="2025-01-13T20:53:21.269367399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:53:21.270072 containerd[1897]: time="2025-01-13T20:53:21.269393840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:53:21.270072 containerd[1897]: time="2025-01-13T20:53:21.269413899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:53:21.270072 containerd[1897]: time="2025-01-13T20:53:21.269429618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:53:21.271642 containerd[1897]: time="2025-01-13T20:53:21.271609653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:53:21.271901 containerd[1897]: time="2025-01-13T20:53:21.271876868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:53:21.272169 containerd[1897]: time="2025-01-13T20:53:21.272141915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:53:21.272220 containerd[1897]: time="2025-01-13T20:53:21.272171875Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:53:21.272302 containerd[1897]: time="2025-01-13T20:53:21.272283073Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:53:21.273231 containerd[1897]: time="2025-01-13T20:53:21.272450509Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:53:21.277076 containerd[1897]: time="2025-01-13T20:53:21.277029810Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:53:21.277163 containerd[1897]: time="2025-01-13T20:53:21.277117269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:53:21.277163 containerd[1897]: time="2025-01-13T20:53:21.277142374Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:53:21.277236 containerd[1897]: time="2025-01-13T20:53:21.277205062Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:53:21.277272 containerd[1897]: time="2025-01-13T20:53:21.277239506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:53:21.277432 containerd[1897]: time="2025-01-13T20:53:21.277410671Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:53:21.279960 containerd[1897]: time="2025-01-13T20:53:21.279929948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:53:21.280105 containerd[1897]: time="2025-01-13T20:53:21.280084956Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:53:21.280151 containerd[1897]: time="2025-01-13T20:53:21.280113736Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:53:21.280151 containerd[1897]: time="2025-01-13T20:53:21.280139112Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:53:21.280223 containerd[1897]: time="2025-01-13T20:53:21.280160118Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.280223 containerd[1897]: time="2025-01-13T20:53:21.280180848Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.280223 containerd[1897]: time="2025-01-13T20:53:21.280199980Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280220736Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280243705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280262883Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280281561Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280299393Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280327901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280346342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280362676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280380030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280397068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280414874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280434590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280470902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.281497 containerd[1897]: time="2025-01-13T20:53:21.280492471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280516376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280535837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280551146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280569846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280590812Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280621654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280640863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280729070Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280799582Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280826161Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280904445Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280925051Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:53:21.282030 containerd[1897]: time="2025-01-13T20:53:21.280942358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:53:21.282494 containerd[1897]: time="2025-01-13T20:53:21.280965979Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:53:21.282494 containerd[1897]: time="2025-01-13T20:53:21.280980640Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:53:21.282494 containerd[1897]: time="2025-01-13T20:53:21.280995520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:53:21.282618 containerd[1897]: time="2025-01-13T20:53:21.281397290Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:53:21.282618 containerd[1897]: time="2025-01-13T20:53:21.281479240Z" level=info msg="Connect containerd service" Jan 13 20:53:21.282618 containerd[1897]: time="2025-01-13T20:53:21.281518949Z" level=info msg="using legacy CRI server" Jan 13 20:53:21.283507 containerd[1897]: time="2025-01-13T20:53:21.283482086Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:53:21.283759 containerd[1897]: 
time="2025-01-13T20:53:21.283734373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:53:21.284698 containerd[1897]: time="2025-01-13T20:53:21.284668181Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:53:21.286491 containerd[1897]: time="2025-01-13T20:53:21.284811955Z" level=info msg="Start subscribing containerd event" Jan 13 20:53:21.286491 containerd[1897]: time="2025-01-13T20:53:21.284869289Z" level=info msg="Start recovering state" Jan 13 20:53:21.286491 containerd[1897]: time="2025-01-13T20:53:21.285014950Z" level=info msg="Start event monitor" Jan 13 20:53:21.286491 containerd[1897]: time="2025-01-13T20:53:21.285042543Z" level=info msg="Start snapshots syncer" Jan 13 20:53:21.286491 containerd[1897]: time="2025-01-13T20:53:21.285054558Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:53:21.286491 containerd[1897]: time="2025-01-13T20:53:21.285065643Z" level=info msg="Start streaming server" Jan 13 20:53:21.286912 containerd[1897]: time="2025-01-13T20:53:21.286892233Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:53:21.287015 containerd[1897]: time="2025-01-13T20:53:21.287001848Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:53:21.287237 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 20:53:21.287494 containerd[1897]: time="2025-01-13T20:53:21.287476660Z" level=info msg="containerd successfully booted in 0.153995s"
Jan 13 20:53:21.334638 amazon-ssm-agent[1934]: 2025-01-13 20:53:20 INFO Checking if agent identity type EC2 can be assumed
Jan 13 20:53:21.434053 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO Agent will take identity from EC2
Jan 13 20:53:21.535286 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:53:21.634693 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:53:21.734093 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:53:21.753900 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 13 20:53:21.754093 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jan 13 20:53:21.754195 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [amazon-ssm-agent] Starting Core Agent
Jan 13 20:53:21.754276 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 13 20:53:21.754480 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [Registrar] Starting registrar module
Jan 13 20:53:21.754564 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 13 20:53:21.754638 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [EC2Identity] EC2 registration was successful.
Jan 13 20:53:21.754712 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [CredentialRefresher] credentialRefresher has started
Jan 13 20:53:21.754819 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 13 20:53:21.754905 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 13 20:53:21.833556 amazon-ssm-agent[1934]: 2025-01-13 20:53:21 INFO [CredentialRefresher] Next credential rotation will be in 31.408310379033335 minutes
Jan 13 20:53:21.887885 sshd_keygen[1914]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:53:21.917935 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:53:21.925868 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:53:21.957773 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:53:21.958027 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:53:21.965841 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:53:21.989364 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:53:21.999835 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:53:22.003007 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:53:22.005960 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:53:22.771211 amazon-ssm-agent[1934]: 2025-01-13 20:53:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 13 20:53:22.872098 amazon-ssm-agent[1934]: 2025-01-13 20:53:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2103) started
Jan 13 20:53:22.881416 ntpd[1867]: Listen normally on 6 eth0 [fe80::495:d7ff:fe34:c0d1%2]:123
Jan 13 20:53:22.883116 ntpd[1867]: 13 Jan 20:53:22 ntpd[1867]: Listen normally on 6 eth0 [fe80::495:d7ff:fe34:c0d1%2]:123
Jan 13 20:53:22.973243 amazon-ssm-agent[1934]: 2025-01-13 20:53:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 13 20:53:23.121657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:53:23.124164 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:53:23.125316 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:53:23.126603 systemd[1]: Startup finished in 680ms (kernel) + 6.514s (initrd) + 7.952s (userspace) = 15.147s.
Jan 13 20:53:24.622241 kubelet[2118]: E0113 20:53:24.622158 2118 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:53:24.625256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:53:24.625451 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:53:24.626338 systemd[1]: kubelet.service: Consumed 1.053s CPU time.
Jan 13 20:53:28.023919 systemd-resolved[1689]: Clock change detected. Flushing caches.
Jan 13 20:53:30.555086 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:53:30.561931 systemd[1]: Started sshd@0-172.31.29.104:22-139.178.89.65:35566.service - OpenSSH per-connection server daemon (139.178.89.65:35566).
Jan 13 20:53:30.759768 sshd[2131]: Accepted publickey for core from 139.178.89.65 port 35566 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:53:30.765205 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:53:30.781270 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:53:30.797327 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:53:30.805912 systemd-logind[1874]: New session 1 of user core.
Jan 13 20:53:30.829443 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:53:30.844064 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:53:30.859137 (systemd)[2135]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:53:31.042133 systemd[2135]: Queued start job for default target default.target.
Jan 13 20:53:31.052640 systemd[2135]: Created slice app.slice - User Application Slice.
Jan 13 20:53:31.052688 systemd[2135]: Reached target paths.target - Paths.
Jan 13 20:53:31.052811 systemd[2135]: Reached target timers.target - Timers.
Jan 13 20:53:31.054299 systemd[2135]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:53:31.074342 systemd[2135]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:53:31.074496 systemd[2135]: Reached target sockets.target - Sockets.
Jan 13 20:53:31.074517 systemd[2135]: Reached target basic.target - Basic System.
Jan 13 20:53:31.074901 systemd[2135]: Reached target default.target - Main User Target.
Jan 13 20:53:31.074947 systemd[2135]: Startup finished in 205ms.
Jan 13 20:53:31.075224 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:53:31.082904 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:53:31.241110 systemd[1]: Started sshd@1-172.31.29.104:22-139.178.89.65:55512.service - OpenSSH per-connection server daemon (139.178.89.65:55512).
Jan 13 20:53:31.404555 sshd[2146]: Accepted publickey for core from 139.178.89.65 port 55512 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:53:31.406068 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:53:31.411041 systemd-logind[1874]: New session 2 of user core.
Jan 13 20:53:31.418913 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:53:31.539559 sshd[2148]: Connection closed by 139.178.89.65 port 55512
Jan 13 20:53:31.540208 sshd-session[2146]: pam_unix(sshd:session): session closed for user core
Jan 13 20:53:31.544820 systemd[1]: sshd@1-172.31.29.104:22-139.178.89.65:55512.service: Deactivated successfully.
Jan 13 20:53:31.546702 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:53:31.548311 systemd-logind[1874]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:53:31.549410 systemd-logind[1874]: Removed session 2.
Jan 13 20:53:31.578601 systemd[1]: Started sshd@2-172.31.29.104:22-139.178.89.65:55526.service - OpenSSH per-connection server daemon (139.178.89.65:55526).
Jan 13 20:53:31.750982 sshd[2153]: Accepted publickey for core from 139.178.89.65 port 55526 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:53:31.752141 sshd-session[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:53:31.759597 systemd-logind[1874]: New session 3 of user core.
Jan 13 20:53:31.767155 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:53:31.883746 sshd[2155]: Connection closed by 139.178.89.65 port 55526
Jan 13 20:53:31.884436 sshd-session[2153]: pam_unix(sshd:session): session closed for user core
Jan 13 20:53:31.888994 systemd[1]: sshd@2-172.31.29.104:22-139.178.89.65:55526.service: Deactivated successfully.
Jan 13 20:53:31.891780 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:53:31.893801 systemd-logind[1874]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:53:31.895255 systemd-logind[1874]: Removed session 3.
Jan 13 20:53:31.929195 systemd[1]: Started sshd@3-172.31.29.104:22-139.178.89.65:55534.service - OpenSSH per-connection server daemon (139.178.89.65:55534).
Jan 13 20:53:32.105578 sshd[2160]: Accepted publickey for core from 139.178.89.65 port 55534 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:53:32.106640 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:53:32.113986 systemd-logind[1874]: New session 4 of user core.
Jan 13 20:53:32.120725 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:53:32.243597 sshd[2162]: Connection closed by 139.178.89.65 port 55534
Jan 13 20:53:32.244348 sshd-session[2160]: pam_unix(sshd:session): session closed for user core
Jan 13 20:53:32.248099 systemd[1]: sshd@3-172.31.29.104:22-139.178.89.65:55534.service: Deactivated successfully.
Jan 13 20:53:32.250024 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:53:32.250788 systemd-logind[1874]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:53:32.252074 systemd-logind[1874]: Removed session 4.
Jan 13 20:53:32.283020 systemd[1]: Started sshd@4-172.31.29.104:22-139.178.89.65:55538.service - OpenSSH per-connection server daemon (139.178.89.65:55538).
Jan 13 20:53:32.461998 sshd[2167]: Accepted publickey for core from 139.178.89.65 port 55538 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:53:32.464189 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:53:32.473084 systemd-logind[1874]: New session 5 of user core.
Jan 13 20:53:32.479751 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:53:32.594578 sudo[2170]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:53:32.595038 sudo[2170]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:53:32.611671 sudo[2170]: pam_unix(sudo:session): session closed for user root
Jan 13 20:53:32.634850 sshd[2169]: Connection closed by 139.178.89.65 port 55538
Jan 13 20:53:32.636289 sshd-session[2167]: pam_unix(sshd:session): session closed for user core
Jan 13 20:53:32.639903 systemd[1]: sshd@4-172.31.29.104:22-139.178.89.65:55538.service: Deactivated successfully.
Jan 13 20:53:32.642051 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:53:32.643629 systemd-logind[1874]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:53:32.644906 systemd-logind[1874]: Removed session 5.
Jan 13 20:53:32.671909 systemd[1]: Started sshd@5-172.31.29.104:22-139.178.89.65:55554.service - OpenSSH per-connection server daemon (139.178.89.65:55554).
Jan 13 20:53:32.840372 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 55554 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:53:32.841450 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:53:32.846490 systemd-logind[1874]: New session 6 of user core.
Jan 13 20:53:32.855760 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:53:32.952936 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:53:32.953336 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:53:32.957115 sudo[2179]: pam_unix(sudo:session): session closed for user root
Jan 13 20:53:32.963295 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:53:32.963694 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:53:32.984071 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:53:33.020813 augenrules[2201]: No rules
Jan 13 20:53:33.022378 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:53:33.022647 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:53:33.024057 sudo[2178]: pam_unix(sudo:session): session closed for user root
Jan 13 20:53:33.046745 sshd[2177]: Connection closed by 139.178.89.65 port 55554
Jan 13 20:53:33.047502 sshd-session[2175]: pam_unix(sshd:session): session closed for user core
Jan 13 20:53:33.050616 systemd[1]: sshd@5-172.31.29.104:22-139.178.89.65:55554.service: Deactivated successfully.
Jan 13 20:53:33.052445 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:53:33.054155 systemd-logind[1874]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:53:33.055214 systemd-logind[1874]: Removed session 6.
Jan 13 20:53:33.095960 systemd[1]: Started sshd@6-172.31.29.104:22-139.178.89.65:55558.service - OpenSSH per-connection server daemon (139.178.89.65:55558).
Jan 13 20:53:33.278198 sshd[2209]: Accepted publickey for core from 139.178.89.65 port 55558 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI
Jan 13 20:53:33.279901 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:53:33.285788 systemd-logind[1874]: New session 7 of user core.
Jan 13 20:53:33.292773 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:53:33.391512 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:53:33.391922 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:53:34.754909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:53:34.755164 systemd[1]: kubelet.service: Consumed 1.053s CPU time.
Jan 13 20:53:34.765380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:53:34.798058 systemd[1]: Reloading requested from client PID 2249 ('systemctl') (unit session-7.scope)...
Jan 13 20:53:34.798076 systemd[1]: Reloading...
Jan 13 20:53:35.019860 zram_generator::config[2292]: No configuration found.
Jan 13 20:53:35.182311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:53:35.296724 systemd[1]: Reloading finished in 498 ms.
Jan 13 20:53:35.362819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:53:35.366567 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:53:35.372924 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:53:35.373804 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:53:35.374043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:53:35.380306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:53:35.598982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:53:35.612124 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:53:35.676171 kubelet[2352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:53:35.676171 kubelet[2352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:53:35.676171 kubelet[2352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:53:35.676639 kubelet[2352]: I0113 20:53:35.676255 2352 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:53:35.907620 kubelet[2352]: I0113 20:53:35.907437 2352 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:53:35.907620 kubelet[2352]: I0113 20:53:35.907477 2352 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:53:35.907878 kubelet[2352]: I0113 20:53:35.907844 2352 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:53:35.940077 kubelet[2352]: I0113 20:53:35.939487 2352 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:53:35.949910 kubelet[2352]: I0113 20:53:35.949861 2352 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:53:35.950181 kubelet[2352]: I0113 20:53:35.950161 2352 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:53:35.950387 kubelet[2352]: I0113 20:53:35.950370 2352 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:53:35.951198 kubelet[2352]: I0113 20:53:35.951175 2352 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:53:35.951266 kubelet[2352]: I0113 20:53:35.951203 2352 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:53:35.951351 kubelet[2352]: I0113 20:53:35.951333 2352 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:53:35.951652 kubelet[2352]: I0113 20:53:35.951598 2352 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:53:35.951652 kubelet[2352]: I0113 20:53:35.951622 2352 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:53:35.951754 kubelet[2352]: I0113 20:53:35.951653 2352 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:53:35.951754 kubelet[2352]: I0113 20:53:35.951682 2352 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:53:35.954554 kubelet[2352]: E0113 20:53:35.953403 2352 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:35.954554 kubelet[2352]: E0113 20:53:35.953716 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:35.954703 kubelet[2352]: I0113 20:53:35.954686 2352 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:53:35.958721 kubelet[2352]: I0113 20:53:35.958681 2352 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:53:35.960316 kubelet[2352]: W0113 20:53:35.960282 2352 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:53:35.961594 kubelet[2352]: I0113 20:53:35.960966 2352 server.go:1256] "Started kubelet"
Jan 13 20:53:35.962303 kubelet[2352]: I0113 20:53:35.962265 2352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:53:35.974497 kubelet[2352]: I0113 20:53:35.974466 2352 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:53:35.975900 kubelet[2352]: I0113 20:53:35.975789 2352 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:53:35.976209 kubelet[2352]: I0113 20:53:35.976189 2352 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:53:35.976890 kubelet[2352]: I0113 20:53:35.976666 2352 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:53:35.976890 kubelet[2352]: I0113 20:53:35.976737 2352 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:53:35.986511 kubelet[2352]: I0113 20:53:35.986475 2352 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:53:35.990554 kubelet[2352]: I0113 20:53:35.989005 2352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:53:35.990554 kubelet[2352]: I0113 20:53:35.989405 2352 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:53:35.995934 kubelet[2352]: I0113 20:53:35.991950 2352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:53:36.000232 kubelet[2352]: W0113 20:53:36.000205 2352 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:53:36.000411 kubelet[2352]: E0113 20:53:36.000401 2352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:53:36.000666 kubelet[2352]: W0113 20:53:36.000647 2352 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.29.104" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:53:36.000733 kubelet[2352]: E0113 20:53:36.000678 2352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.29.104" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:53:36.000794 kubelet[2352]: E0113 20:53:36.000756 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.29.104\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 20:53:36.000855 kubelet[2352]: E0113 20:53:36.000845 2352 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:53:36.001072 kubelet[2352]: W0113 20:53:36.001039 2352 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:53:36.001142 kubelet[2352]: E0113 20:53:36.001079 2352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:53:36.001429 kubelet[2352]: I0113 20:53:36.001414 2352 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:53:36.010041 kubelet[2352]: E0113 20:53:36.010002 2352 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.104.181a5bde24ad020a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.104,UID:172.31.29.104,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.29.104,},FirstTimestamp:2025-01-13 20:53:35.960932874 +0000 UTC m=+0.344015430,LastTimestamp:2025-01-13 20:53:35.960932874 +0000 UTC m=+0.344015430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.104,}"
Jan 13 20:53:36.039914 kubelet[2352]: E0113 20:53:36.039867 2352 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.104.181a5bde270add3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.104,UID:172.31.29.104,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.29.104,},FirstTimestamp:2025-01-13 20:53:36.000638271 +0000 UTC m=+0.383720845,LastTimestamp:2025-01-13 20:53:36.000638271 +0000 UTC m=+0.383720845,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.104,}"
Jan 13 20:53:36.040196 kubelet[2352]: I0113 20:53:36.040171 2352 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:53:36.040196 kubelet[2352]: I0113 20:53:36.040192 2352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:53:36.040285 kubelet[2352]: I0113 20:53:36.040212 2352 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:53:36.043869 kubelet[2352]: I0113 20:53:36.043834 2352 policy_none.go:49] "None policy: Start"
Jan 13 20:53:36.048555 kubelet[2352]: I0113 20:53:36.048410 2352 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:53:36.048555 kubelet[2352]: I0113 20:53:36.048442 2352 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:53:36.058942 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:53:36.073425 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:53:36.079274 kubelet[2352]: I0113 20:53:36.077191 2352 kubelet_node_status.go:73] "Attempting to register node" node="172.31.29.104"
Jan 13 20:53:36.084964 kubelet[2352]: I0113 20:53:36.084790 2352 kubelet_node_status.go:76] "Successfully registered node" node="172.31.29.104"
Jan 13 20:53:36.089143 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:53:36.090955 kubelet[2352]: I0113 20:53:36.090916 2352 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:53:36.091947 kubelet[2352]: I0113 20:53:36.091930 2352 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:53:36.095411 kubelet[2352]: E0113 20:53:36.095391 2352 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.29.104\" not found"
Jan 13 20:53:36.101212 kubelet[2352]: I0113 20:53:36.100660 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:53:36.104444 kubelet[2352]: I0113 20:53:36.104295 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:53:36.104444 kubelet[2352]: I0113 20:53:36.104336 2352 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:53:36.104627 kubelet[2352]: I0113 20:53:36.104611 2352 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:53:36.106271 kubelet[2352]: E0113 20:53:36.104688 2352 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 13 20:53:36.275728 kubelet[2352]: E0113 20:53:36.275599 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:36.376180 kubelet[2352]: E0113 20:53:36.376135 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:36.476927 kubelet[2352]: E0113 20:53:36.476883 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:36.577426 kubelet[2352]: E0113 20:53:36.577295 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:36.678085 kubelet[2352]: E0113 20:53:36.678030 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:36.778372 kubelet[2352]: E0113 20:53:36.778323 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:36.879128 kubelet[2352]: E0113 20:53:36.879002 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:36.911634 kubelet[2352]: I0113 20:53:36.911584 2352 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 20:53:36.911804 kubelet[2352]: W0113 20:53:36.911778 2352 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:53:36.953943 kubelet[2352]: E0113 20:53:36.953885 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:36.979444 kubelet[2352]: E0113 20:53:36.979394 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:37.080192 kubelet[2352]: E0113 20:53:37.080148 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:37.132026 sudo[2212]: pam_unix(sudo:session): session closed for user root
Jan 13 20:53:37.155086 sshd[2211]: Connection closed by 139.178.89.65 port 55558
Jan 13 20:53:37.155822 sshd-session[2209]: pam_unix(sshd:session): session closed for user core
Jan 13 20:53:37.163858 systemd[1]: sshd@6-172.31.29.104:22-139.178.89.65:55558.service: Deactivated successfully.
Jan 13 20:53:37.166336 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:53:37.169441 systemd-logind[1874]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:53:37.171633 systemd-logind[1874]: Removed session 7.
Jan 13 20:53:37.180613 kubelet[2352]: E0113 20:53:37.180578 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:37.281550 kubelet[2352]: E0113 20:53:37.281480 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:37.382254 kubelet[2352]: E0113 20:53:37.382097 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.104\" not found"
Jan 13 20:53:37.483716 kubelet[2352]: I0113 20:53:37.483683 2352 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:53:37.484170 containerd[1897]: time="2025-01-13T20:53:37.484037993Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:53:37.484743 kubelet[2352]: I0113 20:53:37.484702 2352 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 20:53:37.954934 kubelet[2352]: E0113 20:53:37.954886 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:37.954934 kubelet[2352]: I0113 20:53:37.954898 2352 apiserver.go:52] "Watching apiserver"
Jan 13 20:53:37.961547 kubelet[2352]: I0113 20:53:37.961500 2352 topology_manager.go:215] "Topology Admit Handler" podUID="f9ae47d3-7ab8-43ed-8de9-19f74619fc51" podNamespace="calico-system" podName="calico-node-h4qss"
Jan 13 20:53:37.961733 kubelet[2352]: I0113 20:53:37.961688 2352 topology_manager.go:215] "Topology Admit Handler" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" podNamespace="calico-system" podName="csi-node-driver-jvgjw"
Jan 13 20:53:37.961802 kubelet[2352]: I0113 20:53:37.961765 2352 topology_manager.go:215] "Topology Admit Handler" podUID="aa77b08e-cab2-4552-bcc0-5dbebf2a6e02" podNamespace="kube-system" podName="kube-proxy-ndmdd"
Jan 13 20:53:37.963430 kubelet[2352]: E0113 20:53:37.962674 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1"
Jan 13 20:53:37.972368 systemd[1]: Created slice kubepods-besteffort-podaa77b08e_cab2_4552_bcc0_5dbebf2a6e02.slice - libcontainer container kubepods-besteffort-podaa77b08e_cab2_4552_bcc0_5dbebf2a6e02.slice.
Jan 13 20:53:37.979149 kubelet[2352]: I0113 20:53:37.979119 2352 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:53:37.986866 kubelet[2352]: I0113 20:53:37.986694 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-xtables-lock\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.986866 kubelet[2352]: I0113 20:53:37.986739 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-var-lib-calico\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987157 kubelet[2352]: I0113 20:53:37.986968 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-cni-net-dir\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987157 kubelet[2352]: I0113 20:53:37.987086 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-flexvol-driver-host\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987157 kubelet[2352]: I0113 20:53:37.987128 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fdx\" (UniqueName: \"kubernetes.io/projected/aa77b08e-cab2-4552-bcc0-5dbebf2a6e02-kube-api-access-76fdx\") pod \"kube-proxy-ndmdd\" (UID: \"aa77b08e-cab2-4552-bcc0-5dbebf2a6e02\") " pod="kube-system/kube-proxy-ndmdd"
Jan 13 20:53:37.987500 kubelet[2352]: I0113 20:53:37.987378 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-policysync\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987500 kubelet[2352]: I0113 20:53:37.987478 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-tigera-ca-bundle\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987693 kubelet[2352]: I0113 20:53:37.987557 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-var-run-calico\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987693 kubelet[2352]: I0113 20:53:37.987671 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-cni-log-dir\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987789 kubelet[2352]: I0113 20:53:37.987758 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa77b08e-cab2-4552-bcc0-5dbebf2a6e02-kube-proxy\") pod \"kube-proxy-ndmdd\" (UID: \"aa77b08e-cab2-4552-bcc0-5dbebf2a6e02\") " pod="kube-system/kube-proxy-ndmdd"
Jan 13 20:53:37.987839 kubelet[2352]: I0113 20:53:37.987804 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa77b08e-cab2-4552-bcc0-5dbebf2a6e02-lib-modules\") pod \"kube-proxy-ndmdd\" (UID: \"aa77b08e-cab2-4552-bcc0-5dbebf2a6e02\") " pod="kube-system/kube-proxy-ndmdd"
Jan 13 20:53:37.987839 kubelet[2352]: I0113 20:53:37.987836 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-lib-modules\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987921 kubelet[2352]: I0113 20:53:37.987868 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-node-certs\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.987921 kubelet[2352]: I0113 20:53:37.987900 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7e57e640-184c-47ee-a3aa-558418051dc1-varrun\") pod \"csi-node-driver-jvgjw\" (UID: \"7e57e640-184c-47ee-a3aa-558418051dc1\") " pod="calico-system/csi-node-driver-jvgjw"
Jan 13 20:53:37.988067 kubelet[2352]: I0113 20:53:37.987930 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e57e640-184c-47ee-a3aa-558418051dc1-kubelet-dir\") pod \"csi-node-driver-jvgjw\" (UID: \"7e57e640-184c-47ee-a3aa-558418051dc1\") " pod="calico-system/csi-node-driver-jvgjw"
Jan 13 20:53:37.988067 kubelet[2352]: I0113 20:53:37.987962 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e57e640-184c-47ee-a3aa-558418051dc1-registration-dir\") pod \"csi-node-driver-jvgjw\" (UID: \"7e57e640-184c-47ee-a3aa-558418051dc1\") " pod="calico-system/csi-node-driver-jvgjw"
Jan 13 20:53:37.988296 kubelet[2352]: I0113 20:53:37.988202 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlg8\" (UniqueName: \"kubernetes.io/projected/7e57e640-184c-47ee-a3aa-558418051dc1-kube-api-access-dxlg8\") pod \"csi-node-driver-jvgjw\" (UID: \"7e57e640-184c-47ee-a3aa-558418051dc1\") " pod="calico-system/csi-node-driver-jvgjw"
Jan 13 20:53:37.988296 kubelet[2352]: I0113 20:53:37.988281 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-cni-bin-dir\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.988468 kubelet[2352]: I0113 20:53:37.988312 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb74w\" (UniqueName: \"kubernetes.io/projected/f9ae47d3-7ab8-43ed-8de9-19f74619fc51-kube-api-access-zb74w\") pod \"calico-node-h4qss\" (UID: \"f9ae47d3-7ab8-43ed-8de9-19f74619fc51\") " pod="calico-system/calico-node-h4qss"
Jan 13 20:53:37.988468 kubelet[2352]: I0113 20:53:37.988371 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e57e640-184c-47ee-a3aa-558418051dc1-socket-dir\") pod \"csi-node-driver-jvgjw\" (UID: \"7e57e640-184c-47ee-a3aa-558418051dc1\") " pod="calico-system/csi-node-driver-jvgjw"
Jan 13 20:53:37.988468 kubelet[2352]: I0113 20:53:37.988444 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa77b08e-cab2-4552-bcc0-5dbebf2a6e02-xtables-lock\") pod \"kube-proxy-ndmdd\" (UID: \"aa77b08e-cab2-4552-bcc0-5dbebf2a6e02\") " pod="kube-system/kube-proxy-ndmdd"
Jan 13 20:53:37.993126 systemd[1]: Created slice kubepods-besteffort-podf9ae47d3_7ab8_43ed_8de9_19f74619fc51.slice - libcontainer container kubepods-besteffort-podf9ae47d3_7ab8_43ed_8de9_19f74619fc51.slice.
Jan 13 20:53:38.117717 kubelet[2352]: E0113 20:53:38.115799 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.117717 kubelet[2352]: W0113 20:53:38.115861 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.117717 kubelet[2352]: E0113 20:53:38.115907 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.123633 kubelet[2352]: E0113 20:53:38.119360 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.123633 kubelet[2352]: W0113 20:53:38.119385 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.123633 kubelet[2352]: E0113 20:53:38.119413 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.123633 kubelet[2352]: E0113 20:53:38.119724 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.123633 kubelet[2352]: W0113 20:53:38.119736 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.123633 kubelet[2352]: E0113 20:53:38.119755 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.123633 kubelet[2352]: E0113 20:53:38.120963 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.123633 kubelet[2352]: W0113 20:53:38.120979 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.123633 kubelet[2352]: E0113 20:53:38.122853 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.127411 kubelet[2352]: E0113 20:53:38.127379 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.127551 kubelet[2352]: W0113 20:53:38.127519 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.127887 kubelet[2352]: E0113 20:53:38.127872 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.127975 kubelet[2352]: W0113 20:53:38.127963 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.128253 kubelet[2352]: E0113 20:53:38.128240 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.128341 kubelet[2352]: W0113 20:53:38.128330 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.132412 kubelet[2352]: E0113 20:53:38.132391 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.132522 kubelet[2352]: W0113 20:53:38.132509 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.133549 kubelet[2352]: E0113 20:53:38.133516 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.133836 kubelet[2352]: W0113 20:53:38.133819 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.134114 kubelet[2352]: E0113 20:53:38.134101 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.134310 kubelet[2352]: W0113 20:53:38.134295 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.134636 kubelet[2352]: E0113 20:53:38.134623 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.134814 kubelet[2352]: W0113 20:53:38.134733 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.135076 kubelet[2352]: E0113 20:53:38.134989 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.135076 kubelet[2352]: W0113 20:53:38.135001 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.135188 kubelet[2352]: E0113 20:53:38.135171 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.135346 kubelet[2352]: E0113 20:53:38.135335 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.135500 kubelet[2352]: W0113 20:53:38.135417 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.135812 kubelet[2352]: E0113 20:53:38.135708 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.135812 kubelet[2352]: W0113 20:53:38.135724 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.140628 kubelet[2352]: E0113 20:53:38.140605 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.140970 kubelet[2352]: W0113 20:53:38.140795 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.140970 kubelet[2352]: E0113 20:53:38.140826 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.140970 kubelet[2352]: E0113 20:53:38.140904 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.140970 kubelet[2352]: E0113 20:53:38.140933 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.140970 kubelet[2352]: E0113 20:53:38.140966 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.141217 kubelet[2352]: E0113 20:53:38.140982 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.141217 kubelet[2352]: E0113 20:53:38.140997 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.141217 kubelet[2352]: E0113 20:53:38.141010 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.141217 kubelet[2352]: E0113 20:53:38.141029 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.141217 kubelet[2352]: E0113 20:53:38.141070 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.141217 kubelet[2352]: E0113 20:53:38.141089 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.145492 kubelet[2352]: E0113 20:53:38.145283 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.145492 kubelet[2352]: W0113 20:53:38.145302 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.145492 kubelet[2352]: E0113 20:53:38.145331 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.146269 kubelet[2352]: E0113 20:53:38.146041 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.146269 kubelet[2352]: W0113 20:53:38.146057 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.146269 kubelet[2352]: E0113 20:53:38.146080 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.146447 kubelet[2352]: E0113 20:53:38.146428 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.146447 kubelet[2352]: W0113 20:53:38.146443 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.146584 kubelet[2352]: E0113 20:53:38.146482 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:38.146837 kubelet[2352]: E0113 20:53:38.146820 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:38.146837 kubelet[2352]: W0113 20:53:38.146833 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:38.146997 kubelet[2352]: E0113 20:53:38.146924 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.147290 kubelet[2352]: E0113 20:53:38.147272 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.147290 kubelet[2352]: W0113 20:53:38.147286 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.147401 kubelet[2352]: E0113 20:53:38.147376 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.147594 kubelet[2352]: E0113 20:53:38.147577 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.147594 kubelet[2352]: W0113 20:53:38.147590 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.147692 kubelet[2352]: E0113 20:53:38.147680 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.147849 kubelet[2352]: E0113 20:53:38.147835 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.147849 kubelet[2352]: W0113 20:53:38.147848 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.148034 kubelet[2352]: E0113 20:53:38.147947 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.148171 kubelet[2352]: E0113 20:53:38.148064 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.148171 kubelet[2352]: W0113 20:53:38.148072 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.148266 kubelet[2352]: E0113 20:53:38.148238 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.148390 kubelet[2352]: E0113 20:53:38.148374 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.148390 kubelet[2352]: W0113 20:53:38.148386 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.148509 kubelet[2352]: E0113 20:53:38.148472 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.148741 kubelet[2352]: E0113 20:53:38.148730 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.148829 kubelet[2352]: W0113 20:53:38.148741 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.148829 kubelet[2352]: E0113 20:53:38.148773 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.149067 kubelet[2352]: E0113 20:53:38.149032 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.149067 kubelet[2352]: W0113 20:53:38.149063 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.149406 kubelet[2352]: E0113 20:53:38.149167 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.149406 kubelet[2352]: E0113 20:53:38.149280 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.149406 kubelet[2352]: W0113 20:53:38.149289 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.149406 kubelet[2352]: E0113 20:53:38.149347 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.149972 kubelet[2352]: E0113 20:53:38.149878 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.149972 kubelet[2352]: W0113 20:53:38.149894 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.150264 kubelet[2352]: E0113 20:53:38.150251 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.150426 kubelet[2352]: W0113 20:53:38.150337 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.150657 kubelet[2352]: E0113 20:53:38.150645 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.151151 kubelet[2352]: W0113 20:53:38.150740 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.151395 kubelet[2352]: E0113 20:53:38.151382 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.151565 kubelet[2352]: W0113 20:53:38.151458 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.155569 kubelet[2352]: E0113 20:53:38.151805 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jan 13 20:53:38.155569 kubelet[2352]: W0113 20:53:38.151821 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.155569 kubelet[2352]: E0113 20:53:38.152255 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.155569 kubelet[2352]: W0113 20:53:38.152437 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.155569 kubelet[2352]: E0113 20:53:38.152462 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.155569 kubelet[2352]: E0113 20:53:38.152870 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.155569 kubelet[2352]: W0113 20:53:38.152881 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.155569 kubelet[2352]: E0113 20:53:38.152898 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.155569 kubelet[2352]: E0113 20:53:38.153186 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.155569 kubelet[2352]: E0113 20:53:38.153548 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.156025 kubelet[2352]: W0113 20:53:38.153559 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.156025 kubelet[2352]: E0113 20:53:38.153611 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.156025 kubelet[2352]: E0113 20:53:38.153641 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.156025 kubelet[2352]: E0113 20:53:38.154025 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.156025 kubelet[2352]: W0113 20:53:38.154036 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.156025 kubelet[2352]: E0113 20:53:38.154053 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.156025 kubelet[2352]: E0113 20:53:38.154078 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.156025 kubelet[2352]: E0113 20:53:38.154306 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.156025 kubelet[2352]: W0113 20:53:38.154317 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.156025 kubelet[2352]: E0113 20:53:38.154332 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.156510 kubelet[2352]: E0113 20:53:38.154356 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.156510 kubelet[2352]: E0113 20:53:38.154605 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.156510 kubelet[2352]: W0113 20:53:38.154615 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.156510 kubelet[2352]: E0113 20:53:38.154631 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.156510 kubelet[2352]: E0113 20:53:38.154879 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.156510 kubelet[2352]: E0113 20:53:38.155249 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.156510 kubelet[2352]: W0113 20:53:38.155261 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.156510 kubelet[2352]: E0113 20:53:38.155278 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.157786 kubelet[2352]: E0113 20:53:38.157710 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.157786 kubelet[2352]: W0113 20:53:38.157724 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.157786 kubelet[2352]: E0113 20:53:38.157742 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.185236 kubelet[2352]: E0113 20:53:38.185207 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.185236 kubelet[2352]: W0113 20:53:38.185232 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.185392 kubelet[2352]: E0113 20:53:38.185277 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:53:38.189857 kubelet[2352]: E0113 20:53:38.185591 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:38.189857 kubelet[2352]: W0113 20:53:38.185605 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:38.189857 kubelet[2352]: E0113 20:53:38.185622 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:53:38.290993 containerd[1897]: time="2025-01-13T20:53:38.290555550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndmdd,Uid:aa77b08e-cab2-4552-bcc0-5dbebf2a6e02,Namespace:kube-system,Attempt:0,}" Jan 13 20:53:38.314729 containerd[1897]: time="2025-01-13T20:53:38.312368207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h4qss,Uid:f9ae47d3-7ab8-43ed-8de9-19f74619fc51,Namespace:calico-system,Attempt:0,}" Jan 13 20:53:38.916885 containerd[1897]: time="2025-01-13T20:53:38.916833679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:53:38.919047 containerd[1897]: time="2025-01-13T20:53:38.918963207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:53:38.920858 containerd[1897]: time="2025-01-13T20:53:38.920805945Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:53:38.922277 containerd[1897]: time="2025-01-13T20:53:38.922237729Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:53:38.923259 containerd[1897]: time="2025-01-13T20:53:38.923173168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:53:38.926315 containerd[1897]: time="2025-01-13T20:53:38.926256398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:53:38.928563 containerd[1897]: time="2025-01-13T20:53:38.927393792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 636.330712ms" Jan 13 20:53:38.928563 containerd[1897]: time="2025-01-13T20:53:38.928427653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.503918ms" Jan 13 20:53:38.955681 kubelet[2352]: E0113 20:53:38.955625 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:39.121956 containerd[1897]: time="2025-01-13T20:53:39.121489743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:39.121956 containerd[1897]: time="2025-01-13T20:53:39.121563153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:39.121956 containerd[1897]: time="2025-01-13T20:53:39.121586931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:39.121956 containerd[1897]: time="2025-01-13T20:53:39.121688258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:39.123133 containerd[1897]: time="2025-01-13T20:53:39.123002050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:39.123278 containerd[1897]: time="2025-01-13T20:53:39.123115160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:39.123397 containerd[1897]: time="2025-01-13T20:53:39.123263742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:39.123744 containerd[1897]: time="2025-01-13T20:53:39.123708647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:39.150752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209815068.mount: Deactivated successfully. Jan 13 20:53:39.256043 systemd[1]: run-containerd-runc-k8s.io-e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6-runc.T0k4bs.mount: Deactivated successfully. Jan 13 20:53:39.268825 systemd[1]: Started cri-containerd-848fe392bf72b461cd9884c3b1667a6f66d30c154054e98f359ded1fe1eddb3a.scope - libcontainer container 848fe392bf72b461cd9884c3b1667a6f66d30c154054e98f359ded1fe1eddb3a. Jan 13 20:53:39.270403 systemd[1]: Started cri-containerd-e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6.scope - libcontainer container e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6. 
Jan 13 20:53:39.310357 containerd[1897]: time="2025-01-13T20:53:39.310187784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndmdd,Uid:aa77b08e-cab2-4552-bcc0-5dbebf2a6e02,Namespace:kube-system,Attempt:0,} returns sandbox id \"848fe392bf72b461cd9884c3b1667a6f66d30c154054e98f359ded1fe1eddb3a\"" Jan 13 20:53:39.313999 containerd[1897]: time="2025-01-13T20:53:39.313961687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:53:39.326398 containerd[1897]: time="2025-01-13T20:53:39.326271027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h4qss,Uid:f9ae47d3-7ab8-43ed-8de9-19f74619fc51,Namespace:calico-system,Attempt:0,} returns sandbox id \"e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6\"" Jan 13 20:53:39.957663 kubelet[2352]: E0113 20:53:39.956722 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:40.105006 kubelet[2352]: E0113 20:53:40.104967 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:40.695450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191094749.mount: Deactivated successfully. 
Jan 13 20:53:40.957658 kubelet[2352]: E0113 20:53:40.957438 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:41.334828 containerd[1897]: time="2025-01-13T20:53:41.334630873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:41.335952 containerd[1897]: time="2025-01-13T20:53:41.335859822Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:53:41.337378 containerd[1897]: time="2025-01-13T20:53:41.337318695Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:41.342872 containerd[1897]: time="2025-01-13T20:53:41.340374756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:41.342872 containerd[1897]: time="2025-01-13T20:53:41.342252806Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.028208759s" Jan 13 20:53:41.342872 containerd[1897]: time="2025-01-13T20:53:41.342294401Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:53:41.343811 containerd[1897]: time="2025-01-13T20:53:41.343716354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:53:41.345162 
containerd[1897]: time="2025-01-13T20:53:41.345127274Z" level=info msg="CreateContainer within sandbox \"848fe392bf72b461cd9884c3b1667a6f66d30c154054e98f359ded1fe1eddb3a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:53:41.368421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945062268.mount: Deactivated successfully. Jan 13 20:53:41.380355 containerd[1897]: time="2025-01-13T20:53:41.380157411Z" level=info msg="CreateContainer within sandbox \"848fe392bf72b461cd9884c3b1667a6f66d30c154054e98f359ded1fe1eddb3a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f776fdc94d76098b5bd3a1d831505b35add40ab99a47194cfea515b231be1e3\"" Jan 13 20:53:41.381358 containerd[1897]: time="2025-01-13T20:53:41.381249736Z" level=info msg="StartContainer for \"7f776fdc94d76098b5bd3a1d831505b35add40ab99a47194cfea515b231be1e3\"" Jan 13 20:53:41.432766 systemd[1]: Started cri-containerd-7f776fdc94d76098b5bd3a1d831505b35add40ab99a47194cfea515b231be1e3.scope - libcontainer container 7f776fdc94d76098b5bd3a1d831505b35add40ab99a47194cfea515b231be1e3. 
Jan 13 20:53:41.487413 containerd[1897]: time="2025-01-13T20:53:41.487364645Z" level=info msg="StartContainer for \"7f776fdc94d76098b5bd3a1d831505b35add40ab99a47194cfea515b231be1e3\" returns successfully" Jan 13 20:53:41.958520 kubelet[2352]: E0113 20:53:41.958395 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:42.109562 kubelet[2352]: E0113 20:53:42.105723 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:42.180933 kubelet[2352]: I0113 20:53:42.180892 2352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ndmdd" podStartSLOduration=4.150310134 podStartE2EDuration="6.180839321s" podCreationTimestamp="2025-01-13 20:53:36 +0000 UTC" firstStartedPulling="2025-01-13 20:53:39.31231312 +0000 UTC m=+3.695395660" lastFinishedPulling="2025-01-13 20:53:41.342842315 +0000 UTC m=+5.725924847" observedRunningTime="2025-01-13 20:53:42.180552188 +0000 UTC m=+6.563634740" watchObservedRunningTime="2025-01-13 20:53:42.180839321 +0000 UTC m=+6.563921873" Jan 13 20:53:42.207242 kubelet[2352]: E0113 20:53:42.207200 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:53:42.207242 kubelet[2352]: W0113 20:53:42.207231 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:53:42.207819 kubelet[2352]: E0113 20:53:42.207259 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input"
Jan 13 20:53:42.207819 kubelet[2352]: E0113 20:53:42.207632 2352 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:53:42.207819 kubelet[2352]: W0113 20:53:42.207646 2352 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:53:42.207819 kubelet[2352]: E0113 20:53:42.207664 2352 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:53:42.524357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942214066.mount: Deactivated successfully.
Jan 13 20:53:42.682324 containerd[1897]: time="2025-01-13T20:53:42.682279200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:42.683328 containerd[1897]: time="2025-01-13T20:53:42.683284608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 13 20:53:42.685314 containerd[1897]: time="2025-01-13T20:53:42.684208307Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:42.688011 containerd[1897]: time="2025-01-13T20:53:42.687266026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:42.688011 containerd[1897]: time="2025-01-13T20:53:42.687869108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.34384767s"
Jan 13 20:53:42.688011 containerd[1897]: time="2025-01-13T20:53:42.687903940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 13 20:53:42.690160 containerd[1897]: time="2025-01-13T20:53:42.690130718Z" level=info msg="CreateContainer within sandbox \"e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 20:53:42.706660 containerd[1897]: time="2025-01-13T20:53:42.706607869Z" level=info msg="CreateContainer within sandbox \"e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe\""
Jan 13 20:53:42.709123 containerd[1897]: time="2025-01-13T20:53:42.709087733Z" level=info msg="StartContainer for \"1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe\""
Jan 13 20:53:42.747735 systemd[1]: Started cri-containerd-1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe.scope - libcontainer container 1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe.
Jan 13 20:53:42.782418 containerd[1897]: time="2025-01-13T20:53:42.782257386Z" level=info msg="StartContainer for \"1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe\" returns successfully"
Jan 13 20:53:42.793967 systemd[1]: cri-containerd-1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe.scope: Deactivated successfully.
Jan 13 20:53:42.822362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe-rootfs.mount: Deactivated successfully.
Jan 13 20:53:42.959617 kubelet[2352]: E0113 20:53:42.959567 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:42.964979 containerd[1897]: time="2025-01-13T20:53:42.964751279Z" level=info msg="shim disconnected" id=1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe namespace=k8s.io
Jan 13 20:53:42.964979 containerd[1897]: time="2025-01-13T20:53:42.964806600Z" level=warning msg="cleaning up after shim disconnected" id=1c3ee1c49fd61cdecfd1c7b05a30b6a5ee749dbf1dc5a6b274f8ac5bab233dbe namespace=k8s.io
Jan 13 20:53:42.964979 containerd[1897]: time="2025-01-13T20:53:42.964820415Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:53:43.164054 containerd[1897]: time="2025-01-13T20:53:43.163495756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 20:53:43.960673 kubelet[2352]: E0113 20:53:43.960624 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:44.105924 kubelet[2352]: E0113 20:53:44.105422 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1"
Jan 13 20:53:44.961576 kubelet[2352]: E0113 20:53:44.961522 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:45.962850 kubelet[2352]: E0113 20:53:45.962566 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:46.105251 kubelet[2352]: E0113 20:53:46.104878 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1"
Jan 13 20:53:46.963360 kubelet[2352]: E0113 20:53:46.963317 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:47.141666 containerd[1897]: time="2025-01-13T20:53:47.141617808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:47.143001 containerd[1897]: time="2025-01-13T20:53:47.142921797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 13 20:53:47.143968 containerd[1897]: time="2025-01-13T20:53:47.143902498Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:47.147389 containerd[1897]: time="2025-01-13T20:53:47.146452564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:47.147389 containerd[1897]: time="2025-01-13T20:53:47.147262400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.983707363s"
Jan 13 20:53:47.147389 containerd[1897]: time="2025-01-13T20:53:47.147293212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 13 20:53:47.149707 containerd[1897]: time="2025-01-13T20:53:47.149674250Z" level=info msg="CreateContainer within sandbox \"e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:53:47.175760 containerd[1897]: time="2025-01-13T20:53:47.175702024Z" level=info msg="CreateContainer within sandbox \"e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f\""
Jan 13 20:53:47.178553 containerd[1897]: time="2025-01-13T20:53:47.176316354Z" level=info msg="StartContainer for \"f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f\""
Jan 13 20:53:47.218733 systemd[1]: Started cri-containerd-f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f.scope - libcontainer container f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f.
Jan 13 20:53:47.270571 containerd[1897]: time="2025-01-13T20:53:47.270501944Z" level=info msg="StartContainer for \"f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f\" returns successfully"
Jan 13 20:53:47.963995 kubelet[2352]: E0113 20:53:47.963946 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:48.038486 systemd[1]: cri-containerd-f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f.scope: Deactivated successfully.
Jan 13 20:53:48.044191 kubelet[2352]: I0113 20:53:48.044157 2352 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:53:48.075023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f-rootfs.mount: Deactivated successfully.
Jan 13 20:53:48.117910 systemd[1]: Created slice kubepods-besteffort-pod7e57e640_184c_47ee_a3aa_558418051dc1.slice - libcontainer container kubepods-besteffort-pod7e57e640_184c_47ee_a3aa_558418051dc1.slice.
Jan 13 20:53:48.120743 containerd[1897]: time="2025-01-13T20:53:48.120688214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:0,}"
Jan 13 20:53:48.334348 containerd[1897]: time="2025-01-13T20:53:48.332700682Z" level=error msg="Failed to destroy network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:53:48.334348 containerd[1897]: time="2025-01-13T20:53:48.333590808Z" level=error msg="encountered an error cleaning up failed sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:53:48.334348 containerd[1897]: time="2025-01-13T20:53:48.333673684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:53:48.336574 kubelet[2352]: E0113 20:53:48.335916 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:53:48.336574 kubelet[2352]: E0113 20:53:48.336066 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw"
Jan 13 20:53:48.336574 kubelet[2352]: E0113 20:53:48.336116 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw"
Jan 13 20:53:48.338127 kubelet[2352]: E0113 20:53:48.336220 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1"
Jan 13 20:53:48.337705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9-shm.mount: Deactivated successfully.
Jan 13 20:53:48.626045 containerd[1897]: time="2025-01-13T20:53:48.625377193Z" level=info msg="shim disconnected" id=f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f namespace=k8s.io
Jan 13 20:53:48.626045 containerd[1897]: time="2025-01-13T20:53:48.625431126Z" level=warning msg="cleaning up after shim disconnected" id=f890e56bd0dafb497a358f94d1101723ac8065e61963086e328901c6c8c31f5f namespace=k8s.io
Jan 13 20:53:48.626045 containerd[1897]: time="2025-01-13T20:53:48.625442920Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:53:48.964591 kubelet[2352]: E0113 20:53:48.964414 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:49.212844 kubelet[2352]: I0113 20:53:49.212811 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9"
Jan 13 20:53:49.213507 containerd[1897]: time="2025-01-13T20:53:49.213471926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:53:49.213901 containerd[1897]: time="2025-01-13T20:53:49.213869576Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\""
Jan 13 20:53:49.214274 containerd[1897]: time="2025-01-13T20:53:49.214248542Z" level=info msg="Ensure that sandbox bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9 in task-service has been cleanup successfully"
Jan 13 20:53:49.218845 containerd[1897]: time="2025-01-13T20:53:49.214834679Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully"
Jan 13 20:53:49.218845 containerd[1897]: time="2025-01-13T20:53:49.214859233Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully"
Jan 13 20:53:49.228276 containerd[1897]: time="2025-01-13T20:53:49.228012406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:1,}"
Jan 13 20:53:49.229476 systemd[1]: run-netns-cni\x2d19b5f86e\x2d8410\x2d40c6\x2df7f8\x2dba90e7808b77.mount: Deactivated successfully.
Jan 13 20:53:49.319466 containerd[1897]: time="2025-01-13T20:53:49.319417186Z" level=error msg="Failed to destroy network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:53:49.322022 containerd[1897]: time="2025-01-13T20:53:49.321969326Z" level=error msg="encountered an error cleaning up failed sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:53:49.322395 containerd[1897]: time="2025-01-13T20:53:49.322069111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:1,} failed, error" error="failed to
setup network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:49.322463 kubelet[2352]: E0113 20:53:49.322426 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:49.322609 kubelet[2352]: E0113 20:53:49.322489 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:49.322609 kubelet[2352]: E0113 20:53:49.322518 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:49.322717 kubelet[2352]: E0113 20:53:49.322662 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:49.323500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46-shm.mount: Deactivated successfully. Jan 13 20:53:49.964750 kubelet[2352]: E0113 20:53:49.964701 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:50.216406 kubelet[2352]: I0113 20:53:50.216232 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46" Jan 13 20:53:50.217594 containerd[1897]: time="2025-01-13T20:53:50.217183783Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:50.220941 containerd[1897]: time="2025-01-13T20:53:50.218085459Z" level=info msg="Ensure that sandbox f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46 in task-service has been cleanup successfully" Jan 13 20:53:50.220941 containerd[1897]: time="2025-01-13T20:53:50.220808291Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:50.220941 containerd[1897]: time="2025-01-13T20:53:50.220835965Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:50.220477 systemd[1]: 
run-netns-cni\x2d106c5433\x2d2b25\x2daa30\x2d240a\x2dacd65a47d0c3.mount: Deactivated successfully. Jan 13 20:53:50.224052 containerd[1897]: time="2025-01-13T20:53:50.221945433Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:50.224052 containerd[1897]: time="2025-01-13T20:53:50.222104359Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:53:50.224052 containerd[1897]: time="2025-01-13T20:53:50.222123081Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:50.224052 containerd[1897]: time="2025-01-13T20:53:50.223840023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:2,}" Jan 13 20:53:50.390573 containerd[1897]: time="2025-01-13T20:53:50.386603771Z" level=error msg="Failed to destroy network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:50.390573 containerd[1897]: time="2025-01-13T20:53:50.387114571Z" level=error msg="encountered an error cleaning up failed sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:50.390573 containerd[1897]: time="2025-01-13T20:53:50.387210463Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:50.390797 kubelet[2352]: E0113 20:53:50.387842 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:50.390797 kubelet[2352]: E0113 20:53:50.387910 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:50.390797 kubelet[2352]: E0113 20:53:50.387941 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:50.390955 kubelet[2352]: E0113 20:53:50.388021 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:50.392566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf-shm.mount: Deactivated successfully. Jan 13 20:53:50.965294 kubelet[2352]: E0113 20:53:50.965244 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:51.221253 kubelet[2352]: I0113 20:53:51.221126 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf" Jan 13 20:53:51.222312 containerd[1897]: time="2025-01-13T20:53:51.222276787Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:53:51.225794 containerd[1897]: time="2025-01-13T20:53:51.223267021Z" level=info msg="Ensure that sandbox ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf in task-service has been cleanup successfully" Jan 13 20:53:51.226359 containerd[1897]: time="2025-01-13T20:53:51.225968951Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:53:51.226359 containerd[1897]: time="2025-01-13T20:53:51.225996799Z" level=info msg="StopPodSandbox for 
\"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:53:51.226497 systemd[1]: run-netns-cni\x2da4991482\x2d3064\x2d3434\x2da509\x2d4fcae6ed831c.mount: Deactivated successfully. Jan 13 20:53:51.228616 containerd[1897]: time="2025-01-13T20:53:51.226905129Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:51.228616 containerd[1897]: time="2025-01-13T20:53:51.227010580Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:51.228616 containerd[1897]: time="2025-01-13T20:53:51.227026484Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:51.228616 containerd[1897]: time="2025-01-13T20:53:51.227772174Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:51.228616 containerd[1897]: time="2025-01-13T20:53:51.227865953Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:53:51.228616 containerd[1897]: time="2025-01-13T20:53:51.227880032Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:51.229296 containerd[1897]: time="2025-01-13T20:53:51.229271401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:3,}" Jan 13 20:53:51.484828 containerd[1897]: time="2025-01-13T20:53:51.484485955Z" level=error msg="Failed to destroy network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.495300 containerd[1897]: time="2025-01-13T20:53:51.494927774Z" level=error msg="encountered an error cleaning up failed sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.495300 containerd[1897]: time="2025-01-13T20:53:51.495015787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.495505 kubelet[2352]: I0113 20:53:51.491989 2352 topology_manager.go:215] "Topology Admit Handler" podUID="c39c5321-4dfc-4b73-a1d2-cf757388b130" podNamespace="default" podName="nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:51.496281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0-shm.mount: Deactivated successfully. 
Jan 13 20:53:51.497471 kubelet[2352]: E0113 20:53:51.496713 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.497471 kubelet[2352]: E0113 20:53:51.496773 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:51.497471 kubelet[2352]: E0113 20:53:51.496802 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:51.497789 kubelet[2352]: E0113 20:53:51.496863 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:51.508999 kubelet[2352]: I0113 20:53:51.508895 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx76g\" (UniqueName: \"kubernetes.io/projected/c39c5321-4dfc-4b73-a1d2-cf757388b130-kube-api-access-bx76g\") pod \"nginx-deployment-6d5f899847-kplr4\" (UID: \"c39c5321-4dfc-4b73-a1d2-cf757388b130\") " pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:51.510299 systemd[1]: Created slice kubepods-besteffort-podc39c5321_4dfc_4b73_a1d2_cf757388b130.slice - libcontainer container kubepods-besteffort-podc39c5321_4dfc_4b73_a1d2_cf757388b130.slice. Jan 13 20:53:51.819111 containerd[1897]: time="2025-01-13T20:53:51.818379576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:0,}" Jan 13 20:53:51.966248 kubelet[2352]: E0113 20:53:51.966211 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:51.986248 containerd[1897]: time="2025-01-13T20:53:51.986080175Z" level=error msg="Failed to destroy network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.986951 containerd[1897]: time="2025-01-13T20:53:51.986914001Z" level=error msg="encountered an error cleaning up failed sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.987064 containerd[1897]: time="2025-01-13T20:53:51.986996615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.987826 kubelet[2352]: E0113 20:53:51.987255 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:51.987826 kubelet[2352]: E0113 20:53:51.987320 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:51.987826 kubelet[2352]: E0113 20:53:51.987353 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:51.987995 kubelet[2352]: E0113 20:53:51.987421 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-kplr4" podUID="c39c5321-4dfc-4b73-a1d2-cf757388b130" Jan 13 20:53:52.029586 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
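Every failure in the entries above reduces to one root cause: the calico CNI plugin cannot stat /var/lib/calico/nodename, which the error text itself attributes to the calico/node container not running or not having mounted /var/lib/calico/. When triaging a journal excerpt like this, it can help to collapse the noise down to the distinct failed sandbox IDs plus the shared cause. A minimal sketch (the helper name `summarize_cni_failures` and its regexes are illustrative, not part of any tool shown in the log):

```python
import re

# Sandbox IDs appear as 64-hex strings in escaped quotes, e.g.
#   failed to setup network for sandbox \"bb9863ff...\"
# \\+ tolerates both the singly- and triply-escaped quoting seen above.
SANDBOX_RE = re.compile(r'failed to setup network for sandbox \\+"([0-9a-f]{64})')
CAUSE_RE = re.compile(r'stat (/var/lib/calico/nodename): no such file or directory')

def summarize_cni_failures(lines):
    """Return (sorted unique failed sandbox IDs, shared root-cause path or None)."""
    sandboxes = set()
    cause = None
    for line in lines:
        m = SANDBOX_RE.search(line)
        if m:
            sandboxes.add(m.group(1))
        c = CAUSE_RE.search(line)
        if c and cause is None:
            cause = c.group(1)
    return sorted(sandboxes), cause
```

Run against the kubelet lines above, this should reduce dozens of repeated errors to a handful of sandbox IDs and a single path to investigate.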
Jan 13 20:53:52.239754 kubelet[2352]: I0113 20:53:52.238558 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0" Jan 13 20:53:52.249421 containerd[1897]: time="2025-01-13T20:53:52.249380440Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:53:52.252061 kubelet[2352]: I0113 20:53:52.252033 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5" Jan 13 20:53:52.252479 containerd[1897]: time="2025-01-13T20:53:52.252444629Z" level=info msg="Ensure that sandbox ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0 in task-service has been cleanup successfully" Jan 13 20:53:52.253135 containerd[1897]: time="2025-01-13T20:53:52.252670468Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:53:52.253584 containerd[1897]: time="2025-01-13T20:53:52.253438205Z" level=info msg="Ensure that sandbox 6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5 in task-service has been cleanup successfully" Jan 13 20:53:52.253839 containerd[1897]: time="2025-01-13T20:53:52.253815963Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 13 20:53:52.253960 containerd[1897]: time="2025-01-13T20:53:52.253905368Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:53:52.255597 containerd[1897]: time="2025-01-13T20:53:52.254780669Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:53:52.255597 containerd[1897]: time="2025-01-13T20:53:52.254883261Z" level=info msg="TearDown network for sandbox 
\"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:53:52.255597 containerd[1897]: time="2025-01-13T20:53:52.254897701Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:53:52.255788 containerd[1897]: time="2025-01-13T20:53:52.255763451Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:52.255887 containerd[1897]: time="2025-01-13T20:53:52.255867590Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:52.255941 containerd[1897]: time="2025-01-13T20:53:52.255888158Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:52.255984 containerd[1897]: time="2025-01-13T20:53:52.255949381Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:53:52.255984 containerd[1897]: time="2025-01-13T20:53:52.255965685Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully" Jan 13 20:53:52.257885 containerd[1897]: time="2025-01-13T20:53:52.257862743Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:52.260321 containerd[1897]: time="2025-01-13T20:53:52.258368096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:1,}" Jan 13 20:53:52.261719 containerd[1897]: time="2025-01-13T20:53:52.260173383Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:53:52.261867 containerd[1897]: 
time="2025-01-13T20:53:52.261847492Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:52.264003 containerd[1897]: time="2025-01-13T20:53:52.263977228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:4,}" Jan 13 20:53:52.316007 systemd[1]: run-netns-cni\x2d0f74fac2\x2d7aa9\x2de33d\x2d22fa\x2d7a49e58e5676.mount: Deactivated successfully. Jan 13 20:53:52.471812 containerd[1897]: time="2025-01-13T20:53:52.471748385Z" level=error msg="Failed to destroy network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.472576 containerd[1897]: time="2025-01-13T20:53:52.472192942Z" level=error msg="encountered an error cleaning up failed sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.472576 containerd[1897]: time="2025-01-13T20:53:52.472370034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.473853 kubelet[2352]: E0113 20:53:52.472670 2352 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.473853 kubelet[2352]: E0113 20:53:52.472734 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:52.473853 kubelet[2352]: E0113 20:53:52.472763 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:52.474250 kubelet[2352]: E0113 20:53:52.472831 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:52.476254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe-shm.mount: Deactivated successfully. Jan 13 20:53:52.499333 containerd[1897]: time="2025-01-13T20:53:52.499205709Z" level=error msg="Failed to destroy network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.500449 containerd[1897]: time="2025-01-13T20:53:52.500400547Z" level=error msg="encountered an error cleaning up failed sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.500576 containerd[1897]: time="2025-01-13T20:53:52.500498674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.501758 kubelet[2352]: E0113 20:53:52.501731 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:52.501862 kubelet[2352]: E0113 20:53:52.501796 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:52.501862 kubelet[2352]: E0113 20:53:52.501825 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:52.501957 kubelet[2352]: E0113 20:53:52.501894 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-6d5f899847-kplr4" podUID="c39c5321-4dfc-4b73-a1d2-cf757388b130" Jan 13 20:53:52.503707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee-shm.mount: Deactivated successfully. Jan 13 20:53:52.966715 kubelet[2352]: E0113 20:53:52.966601 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:53.262058 kubelet[2352]: I0113 20:53:53.261959 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe" Jan 13 20:53:53.263482 containerd[1897]: time="2025-01-13T20:53:53.263438350Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:53:53.268976 containerd[1897]: time="2025-01-13T20:53:53.266250776Z" level=info msg="Ensure that sandbox 29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe in task-service has been cleanup successfully" Jan 13 20:53:53.268976 containerd[1897]: time="2025-01-13T20:53:53.268599842Z" level=info msg="TearDown network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" successfully" Jan 13 20:53:53.268976 containerd[1897]: time="2025-01-13T20:53:53.268628303Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" returns successfully" Jan 13 20:53:53.272842 containerd[1897]: time="2025-01-13T20:53:53.272796270Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:53:53.274134 systemd[1]: run-netns-cni\x2dd8b32c9a\x2d8b69\x2d27dc\x2da0e3\x2d548c1cb35e1b.mount: Deactivated successfully. 
Jan 13 20:53:53.276803 containerd[1897]: time="2025-01-13T20:53:53.275021693Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 13 20:53:53.276803 containerd[1897]: time="2025-01-13T20:53:53.275048209Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:53:53.277191 containerd[1897]: time="2025-01-13T20:53:53.277157404Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:53:53.277284 containerd[1897]: time="2025-01-13T20:53:53.277257274Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:53:53.277284 containerd[1897]: time="2025-01-13T20:53:53.277274096Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:53:53.279436 kubelet[2352]: I0113 20:53:53.277730 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee" Jan 13 20:53:53.279540 containerd[1897]: time="2025-01-13T20:53:53.277890684Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:53.279540 containerd[1897]: time="2025-01-13T20:53:53.277998695Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:53.279540 containerd[1897]: time="2025-01-13T20:53:53.278033909Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:53.279810 containerd[1897]: time="2025-01-13T20:53:53.279788064Z" level=info msg="StopPodSandbox for 
\"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\"" Jan 13 20:53:53.281277 containerd[1897]: time="2025-01-13T20:53:53.281251281Z" level=info msg="Ensure that sandbox a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee in task-service has been cleanup successfully" Jan 13 20:53:53.281597 containerd[1897]: time="2025-01-13T20:53:53.281564853Z" level=info msg="TearDown network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" successfully" Jan 13 20:53:53.283559 containerd[1897]: time="2025-01-13T20:53:53.282593309Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" returns successfully" Jan 13 20:53:53.283559 containerd[1897]: time="2025-01-13T20:53:53.282755618Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:53.283559 containerd[1897]: time="2025-01-13T20:53:53.282842987Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:53:53.283559 containerd[1897]: time="2025-01-13T20:53:53.282856672Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:53.287802 containerd[1897]: time="2025-01-13T20:53:53.287745714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:5,}" Jan 13 20:53:53.288264 containerd[1897]: time="2025-01-13T20:53:53.288084896Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:53:53.288264 containerd[1897]: time="2025-01-13T20:53:53.288208449Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:53:53.288264 containerd[1897]: 
time="2025-01-13T20:53:53.288225188Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully" Jan 13 20:53:53.288436 systemd[1]: run-netns-cni\x2d75157d27\x2ddf70\x2df3ea\x2d5f1b\x2dc523ae0d16cb.mount: Deactivated successfully. Jan 13 20:53:53.289359 containerd[1897]: time="2025-01-13T20:53:53.289313861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:2,}" Jan 13 20:53:53.453016 containerd[1897]: time="2025-01-13T20:53:53.452858176Z" level=error msg="Failed to destroy network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.453543 containerd[1897]: time="2025-01-13T20:53:53.453472290Z" level=error msg="encountered an error cleaning up failed sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.453881 containerd[1897]: time="2025-01-13T20:53:53.453695556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.454906 kubelet[2352]: E0113 20:53:53.454428 2352 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.454906 kubelet[2352]: E0113 20:53:53.454500 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:53.454906 kubelet[2352]: E0113 20:53:53.454543 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:53.455113 kubelet[2352]: E0113 20:53:53.454613 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-kplr4" podUID="c39c5321-4dfc-4b73-a1d2-cf757388b130" Jan 13 20:53:53.456490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a-shm.mount: Deactivated successfully. Jan 13 20:53:53.474749 containerd[1897]: time="2025-01-13T20:53:53.474685068Z" level=error msg="Failed to destroy network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.475128 containerd[1897]: time="2025-01-13T20:53:53.475075579Z" level=error msg="encountered an error cleaning up failed sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.475215 containerd[1897]: time="2025-01-13T20:53:53.475167742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.475594 kubelet[2352]: E0113 20:53:53.475561 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:53.475779 kubelet[2352]: E0113 20:53:53.475648 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:53.475779 kubelet[2352]: E0113 20:53:53.475678 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:53.475967 kubelet[2352]: E0113 20:53:53.475941 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" 
podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:53.966785 kubelet[2352]: E0113 20:53:53.966707 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:54.287740 kubelet[2352]: I0113 20:53:54.286963 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885" Jan 13 20:53:54.289198 containerd[1897]: time="2025-01-13T20:53:54.289162274Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" Jan 13 20:53:54.289613 containerd[1897]: time="2025-01-13T20:53:54.289406249Z" level=info msg="Ensure that sandbox d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885 in task-service has been cleanup successfully" Jan 13 20:53:54.290700 containerd[1897]: time="2025-01-13T20:53:54.289941569Z" level=info msg="TearDown network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" successfully" Jan 13 20:53:54.290700 containerd[1897]: time="2025-01-13T20:53:54.289971672Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" returns successfully" Jan 13 20:53:54.291022 containerd[1897]: time="2025-01-13T20:53:54.290992888Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:53:54.291112 containerd[1897]: time="2025-01-13T20:53:54.291092325Z" level=info msg="TearDown network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" successfully" Jan 13 20:53:54.291161 containerd[1897]: time="2025-01-13T20:53:54.291113596Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" returns successfully" Jan 13 20:53:54.292902 containerd[1897]: time="2025-01-13T20:53:54.292701960Z" level=info msg="StopPodSandbox for 
\"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:53:54.292902 containerd[1897]: time="2025-01-13T20:53:54.292802267Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 13 20:53:54.292902 containerd[1897]: time="2025-01-13T20:53:54.292816131Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:53:54.295140 containerd[1897]: time="2025-01-13T20:53:54.295080761Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:53:54.295256 containerd[1897]: time="2025-01-13T20:53:54.295229590Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:53:54.295256 containerd[1897]: time="2025-01-13T20:53:54.295249128Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:53:54.295951 kubelet[2352]: I0113 20:53:54.295876 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a" Jan 13 20:53:54.297066 containerd[1897]: time="2025-01-13T20:53:54.297039120Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\"" Jan 13 20:53:54.297313 containerd[1897]: time="2025-01-13T20:53:54.297279015Z" level=info msg="Ensure that sandbox c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a in task-service has been cleanup successfully" Jan 13 20:53:54.298201 containerd[1897]: time="2025-01-13T20:53:54.298097101Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:54.298276 containerd[1897]: time="2025-01-13T20:53:54.298244831Z" level=info msg="TearDown network 
for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:54.298276 containerd[1897]: time="2025-01-13T20:53:54.298260570Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:54.299592 containerd[1897]: time="2025-01-13T20:53:54.298362311Z" level=info msg="TearDown network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" successfully" Jan 13 20:53:54.299592 containerd[1897]: time="2025-01-13T20:53:54.298385400Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" returns successfully" Jan 13 20:53:54.299592 containerd[1897]: time="2025-01-13T20:53:54.299417433Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\"" Jan 13 20:53:54.299592 containerd[1897]: time="2025-01-13T20:53:54.299503713Z" level=info msg="TearDown network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" successfully" Jan 13 20:53:54.299592 containerd[1897]: time="2025-01-13T20:53:54.299517627Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" returns successfully" Jan 13 20:53:54.299986 containerd[1897]: time="2025-01-13T20:53:54.299847596Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:53:54.299986 containerd[1897]: time="2025-01-13T20:53:54.299930814Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:53:54.299986 containerd[1897]: time="2025-01-13T20:53:54.299944684Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully" Jan 13 20:53:54.300112 containerd[1897]: time="2025-01-13T20:53:54.299998228Z" level=info 
msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:54.300112 containerd[1897]: time="2025-01-13T20:53:54.300078045Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:53:54.300112 containerd[1897]: time="2025-01-13T20:53:54.300094718Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:54.301254 containerd[1897]: time="2025-01-13T20:53:54.300775354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:3,}" Jan 13 20:53:54.302015 containerd[1897]: time="2025-01-13T20:53:54.301988408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:6,}" Jan 13 20:53:54.318933 systemd[1]: run-netns-cni\x2dc5a2af75\x2d4161\x2d3a76\x2dc8f4\x2d53509101e15e.mount: Deactivated successfully. Jan 13 20:53:54.319394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885-shm.mount: Deactivated successfully. Jan 13 20:53:54.319491 systemd[1]: run-netns-cni\x2d0f5d1e25\x2dce20\x2d38ae\x2d45a2\x2d78f1ce9385cf.mount: Deactivated successfully. 
Jan 13 20:53:54.573043 containerd[1897]: time="2025-01-13T20:53:54.572280259Z" level=error msg="Failed to destroy network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.573043 containerd[1897]: time="2025-01-13T20:53:54.572685952Z" level=error msg="encountered an error cleaning up failed sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.573043 containerd[1897]: time="2025-01-13T20:53:54.572756509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.574267 kubelet[2352]: E0113 20:53:54.573852 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.574267 kubelet[2352]: E0113 20:53:54.573921 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:54.574267 kubelet[2352]: E0113 20:53:54.573955 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:54.575301 kubelet[2352]: E0113 20:53:54.574028 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-kplr4" podUID="c39c5321-4dfc-4b73-a1d2-cf757388b130" Jan 13 20:53:54.579908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0-shm.mount: Deactivated successfully. 
Jan 13 20:53:54.591681 containerd[1897]: time="2025-01-13T20:53:54.591454353Z" level=error msg="Failed to destroy network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.591901 containerd[1897]: time="2025-01-13T20:53:54.591867123Z" level=error msg="encountered an error cleaning up failed sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.591968 containerd[1897]: time="2025-01-13T20:53:54.591939776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.592550 kubelet[2352]: E0113 20:53:54.592510 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:54.593102 kubelet[2352]: E0113 20:53:54.592872 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:54.593102 kubelet[2352]: E0113 20:53:54.592909 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:54.593102 kubelet[2352]: E0113 20:53:54.593006 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:54.968585 kubelet[2352]: E0113 20:53:54.967836 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:55.306738 kubelet[2352]: I0113 20:53:55.306314 2352 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e" Jan 13 20:53:55.307488 containerd[1897]: time="2025-01-13T20:53:55.307328868Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\"" Jan 13 20:53:55.308480 containerd[1897]: time="2025-01-13T20:53:55.308105974Z" level=info msg="Ensure that sandbox 8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e in task-service has been cleanup successfully" Jan 13 20:53:55.310861 containerd[1897]: time="2025-01-13T20:53:55.310835264Z" level=info msg="TearDown network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" successfully" Jan 13 20:53:55.310988 containerd[1897]: time="2025-01-13T20:53:55.310970375Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" returns successfully" Jan 13 20:53:55.312903 containerd[1897]: time="2025-01-13T20:53:55.312353573Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" Jan 13 20:53:55.312903 containerd[1897]: time="2025-01-13T20:53:55.312572660Z" level=info msg="TearDown network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" successfully" Jan 13 20:53:55.312903 containerd[1897]: time="2025-01-13T20:53:55.312586437Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" returns successfully" Jan 13 20:53:55.313601 containerd[1897]: time="2025-01-13T20:53:55.313577895Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:53:55.313795 containerd[1897]: time="2025-01-13T20:53:55.313776446Z" level=info msg="TearDown network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" successfully" Jan 13 20:53:55.313887 containerd[1897]: time="2025-01-13T20:53:55.313868630Z" level=info msg="StopPodSandbox 
for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" returns successfully" Jan 13 20:53:55.314780 containerd[1897]: time="2025-01-13T20:53:55.314756290Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:53:55.314940 containerd[1897]: time="2025-01-13T20:53:55.314914247Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 13 20:53:55.314940 containerd[1897]: time="2025-01-13T20:53:55.314934709Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:53:55.319393 kubelet[2352]: I0113 20:53:55.316898 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0" Jan 13 20:53:55.318625 systemd[1]: run-netns-cni\x2dba3c91c2\x2d810a\x2dd2cd\x2d8239\x2d8525968a7371.mount: Deactivated successfully. 
Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.318080581Z" level=info msg="StopPodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\"" Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.318314601Z" level=info msg="Ensure that sandbox 5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0 in task-service has been cleanup successfully" Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.318604797Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.319109864Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.319131199Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.319521424Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.319620415Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:55.319675 containerd[1897]: time="2025-01-13T20:53:55.319635282Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:55.319136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e-shm.mount: Deactivated successfully. 
Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.321425645Z" level=info msg="TearDown network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" successfully" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.321505848Z" level=info msg="StopPodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" returns successfully" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.322729019Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.322824526Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.322838034Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.323320538Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\"" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.323409138Z" level=info msg="TearDown network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" successfully" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.323422421Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" returns successfully" Jan 13 20:53:55.324010 containerd[1897]: time="2025-01-13T20:53:55.323561845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:7,}" Jan 13 20:53:55.327133 containerd[1897]: time="2025-01-13T20:53:55.324510379Z" level=info msg="StopPodSandbox for 
\"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\"" Jan 13 20:53:55.327133 containerd[1897]: time="2025-01-13T20:53:55.324660355Z" level=info msg="TearDown network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" successfully" Jan 13 20:53:55.327133 containerd[1897]: time="2025-01-13T20:53:55.324675249Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" returns successfully" Jan 13 20:53:55.327133 containerd[1897]: time="2025-01-13T20:53:55.325103436Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:53:55.327133 containerd[1897]: time="2025-01-13T20:53:55.325281321Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:53:55.327133 containerd[1897]: time="2025-01-13T20:53:55.325337757Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully" Jan 13 20:53:55.326219 systemd[1]: run-netns-cni\x2df7e9e955\x2d8b81\x2d413b\x2d4277\x2d8d16cebe8323.mount: Deactivated successfully. 
Jan 13 20:53:55.327452 containerd[1897]: time="2025-01-13T20:53:55.327168614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:4,}" Jan 13 20:53:55.574455 containerd[1897]: time="2025-01-13T20:53:55.573417371Z" level=error msg="Failed to destroy network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.574455 containerd[1897]: time="2025-01-13T20:53:55.574194519Z" level=error msg="encountered an error cleaning up failed sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.574455 containerd[1897]: time="2025-01-13T20:53:55.574279995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.575414 kubelet[2352]: E0113 20:53:55.574968 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.575414 kubelet[2352]: E0113 20:53:55.575034 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:55.575414 kubelet[2352]: E0113 20:53:55.575070 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:55.575728 kubelet[2352]: E0113 20:53:55.575134 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-kplr4" podUID="c39c5321-4dfc-4b73-a1d2-cf757388b130" Jan 13 20:53:55.585932 containerd[1897]: time="2025-01-13T20:53:55.585874325Z" level=error msg="Failed to 
destroy network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.586318 containerd[1897]: time="2025-01-13T20:53:55.586280956Z" level=error msg="encountered an error cleaning up failed sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.586447 containerd[1897]: time="2025-01-13T20:53:55.586363000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.587322 kubelet[2352]: E0113 20:53:55.586932 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:55.587322 kubelet[2352]: E0113 20:53:55.586999 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:55.587322 kubelet[2352]: E0113 20:53:55.587030 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:55.587511 kubelet[2352]: E0113 20:53:55.587095 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:55.954235 kubelet[2352]: E0113 20:53:55.953611 2352 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:55.968838 kubelet[2352]: E0113 20:53:55.968772 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:56.322669 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf-shm.mount: Deactivated successfully. Jan 13 20:53:56.322889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b-shm.mount: Deactivated successfully. Jan 13 20:53:56.331439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124601706.mount: Deactivated successfully. Jan 13 20:53:56.335559 kubelet[2352]: I0113 20:53:56.333760 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b" Jan 13 20:53:56.336415 containerd[1897]: time="2025-01-13T20:53:56.335991781Z" level=info msg="StopPodSandbox for \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\"" Jan 13 20:53:56.336415 containerd[1897]: time="2025-01-13T20:53:56.336264463Z" level=info msg="Ensure that sandbox 53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b in task-service has been cleanup successfully" Jan 13 20:53:56.339638 containerd[1897]: time="2025-01-13T20:53:56.337322076Z" level=info msg="TearDown network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" successfully" Jan 13 20:53:56.339638 containerd[1897]: time="2025-01-13T20:53:56.337417316Z" level=info msg="StopPodSandbox for \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" returns successfully" Jan 13 20:53:56.340915 containerd[1897]: time="2025-01-13T20:53:56.340417141Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\"" Jan 13 20:53:56.343717 systemd[1]: run-netns-cni\x2d8d1b60f3\x2d5786\x2dc392\x2d09ea\x2dc33cd22ee62d.mount: Deactivated successfully. 
Jan 13 20:53:56.346293 containerd[1897]: time="2025-01-13T20:53:56.343998177Z" level=info msg="TearDown network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" successfully" Jan 13 20:53:56.346293 containerd[1897]: time="2025-01-13T20:53:56.344027611Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" returns successfully" Jan 13 20:53:56.346293 containerd[1897]: time="2025-01-13T20:53:56.346084810Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" Jan 13 20:53:56.346293 containerd[1897]: time="2025-01-13T20:53:56.346283133Z" level=info msg="TearDown network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" successfully" Jan 13 20:53:56.346473 containerd[1897]: time="2025-01-13T20:53:56.346301950Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" returns successfully" Jan 13 20:53:56.347789 containerd[1897]: time="2025-01-13T20:53:56.347609448Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:53:56.348808 containerd[1897]: time="2025-01-13T20:53:56.348777893Z" level=info msg="TearDown network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" successfully" Jan 13 20:53:56.348911 containerd[1897]: time="2025-01-13T20:53:56.348811620Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" returns successfully" Jan 13 20:53:56.352124 containerd[1897]: time="2025-01-13T20:53:56.352069334Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:53:56.352244 containerd[1897]: time="2025-01-13T20:53:56.352173299Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 
13 20:53:56.352244 containerd[1897]: time="2025-01-13T20:53:56.352189493Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:53:56.354854 containerd[1897]: time="2025-01-13T20:53:56.354718491Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:53:56.355113 containerd[1897]: time="2025-01-13T20:53:56.354982873Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:53:56.355113 containerd[1897]: time="2025-01-13T20:53:56.355086052Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:53:56.356042 containerd[1897]: time="2025-01-13T20:53:56.355930799Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:56.356210 containerd[1897]: time="2025-01-13T20:53:56.356169255Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:56.356355 containerd[1897]: time="2025-01-13T20:53:56.356192107Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:56.357580 kubelet[2352]: I0113 20:53:56.356877 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf" Jan 13 20:53:56.363801 containerd[1897]: time="2025-01-13T20:53:56.363758794Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:56.363944 containerd[1897]: time="2025-01-13T20:53:56.363877228Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" 
Jan 13 20:53:56.363944 containerd[1897]: time="2025-01-13T20:53:56.363932656Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:56.366589 containerd[1897]: time="2025-01-13T20:53:56.364035502Z" level=info msg="StopPodSandbox for \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\"" Jan 13 20:53:56.366589 containerd[1897]: time="2025-01-13T20:53:56.364231766Z" level=info msg="Ensure that sandbox 8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf in task-service has been cleanup successfully" Jan 13 20:53:56.366589 containerd[1897]: time="2025-01-13T20:53:56.364677770Z" level=info msg="TearDown network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" successfully" Jan 13 20:53:56.366589 containerd[1897]: time="2025-01-13T20:53:56.366496701Z" level=info msg="StopPodSandbox for \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" returns successfully" Jan 13 20:53:56.367136 containerd[1897]: time="2025-01-13T20:53:56.367019876Z" level=info msg="StopPodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\"" Jan 13 20:53:56.367136 containerd[1897]: time="2025-01-13T20:53:56.367114681Z" level=info msg="TearDown network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" successfully" Jan 13 20:53:56.367136 containerd[1897]: time="2025-01-13T20:53:56.367130333Z" level=info msg="StopPodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" returns successfully" Jan 13 20:53:56.367273 containerd[1897]: time="2025-01-13T20:53:56.367244106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:8,}" Jan 13 20:53:56.367588 systemd[1]: run-netns-cni\x2d2edf8143\x2d84b5\x2d9d0d\x2d3085\x2d6f5f9b1b3d7c.mount: Deactivated successfully. 
Jan 13 20:53:56.368062 containerd[1897]: time="2025-01-13T20:53:56.367811790Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\"" Jan 13 20:53:56.368062 containerd[1897]: time="2025-01-13T20:53:56.367896988Z" level=info msg="TearDown network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" successfully" Jan 13 20:53:56.368062 containerd[1897]: time="2025-01-13T20:53:56.367909904Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" returns successfully" Jan 13 20:53:56.369569 containerd[1897]: time="2025-01-13T20:53:56.368236019Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\"" Jan 13 20:53:56.369569 containerd[1897]: time="2025-01-13T20:53:56.368320163Z" level=info msg="TearDown network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" successfully" Jan 13 20:53:56.369569 containerd[1897]: time="2025-01-13T20:53:56.368335313Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" returns successfully" Jan 13 20:53:56.369569 containerd[1897]: time="2025-01-13T20:53:56.368715570Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:53:56.369569 containerd[1897]: time="2025-01-13T20:53:56.368971629Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:53:56.369569 containerd[1897]: time="2025-01-13T20:53:56.369105869Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully" Jan 13 20:53:56.369971 containerd[1897]: time="2025-01-13T20:53:56.369756936Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:5,}" Jan 13 20:53:56.393253 containerd[1897]: time="2025-01-13T20:53:56.393202871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:56.401435 containerd[1897]: time="2025-01-13T20:53:56.401298166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:53:56.412856 containerd[1897]: time="2025-01-13T20:53:56.412803289Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:56.434908 containerd[1897]: time="2025-01-13T20:53:56.434769336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:56.444562 containerd[1897]: time="2025-01-13T20:53:56.443453764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.229939908s" Jan 13 20:53:56.450064 containerd[1897]: time="2025-01-13T20:53:56.449934774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:53:56.482718 containerd[1897]: time="2025-01-13T20:53:56.482251579Z" level=info msg="CreateContainer within sandbox \"e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:53:56.541890 containerd[1897]: time="2025-01-13T20:53:56.541840094Z" level=info msg="CreateContainer within sandbox \"e206361e6a35719794a966becf4ffc784374aab68fe44ea5e668f32ce4a19cc6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c1eef5f9764f53a87599e1966f1955c8287671d3efaa4d0f845b093e034f894\"" Jan 13 20:53:56.544637 containerd[1897]: time="2025-01-13T20:53:56.542812821Z" level=info msg="StartContainer for \"0c1eef5f9764f53a87599e1966f1955c8287671d3efaa4d0f845b093e034f894\"" Jan 13 20:53:56.571713 containerd[1897]: time="2025-01-13T20:53:56.571673866Z" level=error msg="Failed to destroy network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.572120 containerd[1897]: time="2025-01-13T20:53:56.572098955Z" level=error msg="encountered an error cleaning up failed sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.582370 containerd[1897]: time="2025-01-13T20:53:56.582251184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.582884 kubelet[2352]: E0113 
20:53:56.582857 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.583096 kubelet[2352]: E0113 20:53:56.583085 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:56.583196 kubelet[2352]: E0113 20:53:56.583188 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-kplr4" Jan 13 20:53:56.583339 kubelet[2352]: E0113 20:53:56.583328 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-kplr4_default(c39c5321-4dfc-4b73-a1d2-cf757388b130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-kplr4" podUID="c39c5321-4dfc-4b73-a1d2-cf757388b130" Jan 13 20:53:56.622864 containerd[1897]: time="2025-01-13T20:53:56.622796325Z" level=error msg="Failed to destroy network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.623649 containerd[1897]: time="2025-01-13T20:53:56.623588625Z" level=error msg="encountered an error cleaning up failed sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.623956 containerd[1897]: time="2025-01-13T20:53:56.623925245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.624979 kubelet[2352]: E0113 20:53:56.624890 2352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:53:56.625261 kubelet[2352]: E0113 20:53:56.625239 2352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:56.626244 kubelet[2352]: E0113 20:53:56.625567 2352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jvgjw" Jan 13 20:53:56.626366 kubelet[2352]: E0113 20:53:56.626349 2352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jvgjw_calico-system(7e57e640-184c-47ee-a3aa-558418051dc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jvgjw" podUID="7e57e640-184c-47ee-a3aa-558418051dc1" Jan 13 20:53:56.685799 systemd[1]: Started cri-containerd-0c1eef5f9764f53a87599e1966f1955c8287671d3efaa4d0f845b093e034f894.scope - libcontainer container 
0c1eef5f9764f53a87599e1966f1955c8287671d3efaa4d0f845b093e034f894. Jan 13 20:53:56.724442 containerd[1897]: time="2025-01-13T20:53:56.724398586Z" level=info msg="StartContainer for \"0c1eef5f9764f53a87599e1966f1955c8287671d3efaa4d0f845b093e034f894\" returns successfully" Jan 13 20:53:56.814205 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:53:56.814356 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 20:53:56.969589 kubelet[2352]: E0113 20:53:56.969429 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:57.322043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db-shm.mount: Deactivated successfully. Jan 13 20:53:57.363439 kubelet[2352]: I0113 20:53:57.363216 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4" Jan 13 20:53:57.364592 containerd[1897]: time="2025-01-13T20:53:57.364368915Z" level=info msg="StopPodSandbox for \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\"" Jan 13 20:53:57.365412 containerd[1897]: time="2025-01-13T20:53:57.364774308Z" level=info msg="Ensure that sandbox b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4 in task-service has been cleanup successfully" Jan 13 20:53:57.365412 containerd[1897]: time="2025-01-13T20:53:57.365296381Z" level=info msg="TearDown network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\" successfully" Jan 13 20:53:57.365412 containerd[1897]: time="2025-01-13T20:53:57.365340905Z" level=info msg="StopPodSandbox for \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\" returns successfully" Jan 13 20:53:57.369796 containerd[1897]: time="2025-01-13T20:53:57.367680065Z" level=info 
msg="StopPodSandbox for \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\"" Jan 13 20:53:57.369796 containerd[1897]: time="2025-01-13T20:53:57.367803762Z" level=info msg="TearDown network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" successfully" Jan 13 20:53:57.369796 containerd[1897]: time="2025-01-13T20:53:57.369703073Z" level=info msg="StopPodSandbox for \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" returns successfully" Jan 13 20:53:57.370333 kubelet[2352]: I0113 20:53:57.369493 2352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.370449363Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\"" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.370554408Z" level=info msg="TearDown network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" successfully" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.370568215Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" returns successfully" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.370825716Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.370969983Z" level=info msg="TearDown network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" successfully" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.370985952Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" returns successfully" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.371382329Z" 
level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.371469723Z" level=info msg="TearDown network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" successfully" Jan 13 20:53:57.372695 containerd[1897]: time="2025-01-13T20:53:57.371485698Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" returns successfully" Jan 13 20:53:57.372369 systemd[1]: run-netns-cni\x2dd7c8a999\x2dbd27\x2d2286\x2d300d\x2dc5efefbcce25.mount: Deactivated successfully. Jan 13 20:53:57.376672 containerd[1897]: time="2025-01-13T20:53:57.375720386Z" level=info msg="StopPodSandbox for \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\"" Jan 13 20:53:57.376672 containerd[1897]: time="2025-01-13T20:53:57.375961909Z" level=info msg="Ensure that sandbox 205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db in task-service has been cleanup successfully" Jan 13 20:53:57.376672 containerd[1897]: time="2025-01-13T20:53:57.376152405Z" level=info msg="TearDown network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\" successfully" Jan 13 20:53:57.376672 containerd[1897]: time="2025-01-13T20:53:57.376171370Z" level=info msg="StopPodSandbox for \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\" returns successfully" Jan 13 20:53:57.376672 containerd[1897]: time="2025-01-13T20:53:57.376382926Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:53:57.376672 containerd[1897]: time="2025-01-13T20:53:57.376475321Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 13 20:53:57.376672 containerd[1897]: time="2025-01-13T20:53:57.376489664Z" level=info msg="StopPodSandbox for 
\"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:53:57.379564 containerd[1897]: time="2025-01-13T20:53:57.379305082Z" level=info msg="StopPodSandbox for \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\"" Jan 13 20:53:57.379564 containerd[1897]: time="2025-01-13T20:53:57.379402383Z" level=info msg="TearDown network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" successfully" Jan 13 20:53:57.379564 containerd[1897]: time="2025-01-13T20:53:57.379418258Z" level=info msg="StopPodSandbox for \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" returns successfully" Jan 13 20:53:57.381737 containerd[1897]: time="2025-01-13T20:53:57.380843120Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:53:57.381737 containerd[1897]: time="2025-01-13T20:53:57.381053938Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:53:57.381737 containerd[1897]: time="2025-01-13T20:53:57.381074065Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:53:57.384886 containerd[1897]: time="2025-01-13T20:53:57.382687313Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:53:57.384886 containerd[1897]: time="2025-01-13T20:53:57.382786858Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:53:57.384886 containerd[1897]: time="2025-01-13T20:53:57.382802848Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:53:57.384886 containerd[1897]: time="2025-01-13T20:53:57.384608443Z" level=info msg="StopPodSandbox for 
\"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\"" Jan 13 20:53:57.384886 containerd[1897]: time="2025-01-13T20:53:57.384808238Z" level=info msg="TearDown network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" successfully" Jan 13 20:53:57.384886 containerd[1897]: time="2025-01-13T20:53:57.384827382Z" level=info msg="StopPodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" returns successfully" Jan 13 20:53:57.384095 systemd[1]: run-netns-cni\x2da4754fcc\x2d8c5e\x2dd0ee\x2de8e7\x2d5d354e796c75.mount: Deactivated successfully. Jan 13 20:53:57.387345 containerd[1897]: time="2025-01-13T20:53:57.387227318Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\"" Jan 13 20:53:57.387345 containerd[1897]: time="2025-01-13T20:53:57.387249358Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:53:57.387345 containerd[1897]: time="2025-01-13T20:53:57.387323480Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:53:57.387615 containerd[1897]: time="2025-01-13T20:53:57.387323566Z" level=info msg="TearDown network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" successfully" Jan 13 20:53:57.387615 containerd[1897]: time="2025-01-13T20:53:57.387609658Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" returns successfully" Jan 13 20:53:57.387748 containerd[1897]: time="2025-01-13T20:53:57.387675509Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:53:57.388478 containerd[1897]: time="2025-01-13T20:53:57.388319256Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\"" Jan 13 
20:53:57.388478 containerd[1897]: time="2025-01-13T20:53:57.388377931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:9,}" Jan 13 20:53:57.388478 containerd[1897]: time="2025-01-13T20:53:57.388407322Z" level=info msg="TearDown network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" successfully" Jan 13 20:53:57.388478 containerd[1897]: time="2025-01-13T20:53:57.388421286Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" returns successfully" Jan 13 20:53:57.389818 containerd[1897]: time="2025-01-13T20:53:57.389795401Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:53:57.390241 containerd[1897]: time="2025-01-13T20:53:57.390017074Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:53:57.390241 containerd[1897]: time="2025-01-13T20:53:57.390163249Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully" Jan 13 20:53:57.396954 containerd[1897]: time="2025-01-13T20:53:57.396611716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:6,}" Jan 13 20:53:57.446403 kubelet[2352]: I0113 20:53:57.446364 2352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-h4qss" podStartSLOduration=4.326105152 podStartE2EDuration="21.446283912s" podCreationTimestamp="2025-01-13 20:53:36 +0000 UTC" firstStartedPulling="2025-01-13 20:53:39.330688019 +0000 UTC m=+3.713770551" lastFinishedPulling="2025-01-13 20:53:56.45086677 +0000 UTC m=+20.833949311" observedRunningTime="2025-01-13 20:53:57.445277229 +0000 UTC 
m=+21.828359781" watchObservedRunningTime="2025-01-13 20:53:57.446283912 +0000 UTC m=+21.829366461" Jan 13 20:53:57.970132 kubelet[2352]: E0113 20:53:57.970079 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:57.992651 (udev-worker)[3425]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:53:57.992999 systemd-networkd[1737]: cali0a9f38d0833: Link UP Jan 13 20:53:57.994000 systemd-networkd[1737]: cali0a9f38d0833: Gained carrier Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.461 [INFO][3397] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.652 [INFO][3397] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0 nginx-deployment-6d5f899847- default c39c5321-4dfc-4b73-a1d2-cf757388b130 967 0 2025-01-13 20:53:51 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.29.104 nginx-deployment-6d5f899847-kplr4 eth0 default [] [] [kns.default ksa.default.default] cali0a9f38d0833 [] []}} ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.652 [INFO][3397] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.815 [INFO][3411] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" HandleID="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Workload="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.861 [INFO][3411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" HandleID="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Workload="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003acea0), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.104", "pod":"nginx-deployment-6d5f899847-kplr4", "timestamp":"2025-01-13 20:53:57.815370717 +0000 UTC"}, Hostname:"172.31.29.104", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.861 [INFO][3411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.861 [INFO][3411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.861 [INFO][3411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.104' Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.873 [INFO][3411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.901 [INFO][3411] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.920 [INFO][3411] ipam/ipam.go 489: Trying affinity for 192.168.54.128/26 host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.932 [INFO][3411] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.128/26 host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.940 [INFO][3411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.940 [INFO][3411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.943 [INFO][3411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.953 [INFO][3411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.974 [INFO][3411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.129/26] block=192.168.54.128/26 
handle="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.974 [INFO][3411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.129/26] handle="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" host="172.31.29.104" Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.974 [INFO][3411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:53:58.039028 containerd[1897]: 2025-01-13 20:53:57.974 [INFO][3411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.129/26] IPv6=[] ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" HandleID="k8s-pod-network.80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Workload="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" Jan 13 20:53:58.040431 containerd[1897]: 2025-01-13 20:53:57.977 [INFO][3397] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"c39c5321-4dfc-4b73-a1d2-cf757388b130", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 53, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"", Pod:"nginx-deployment-6d5f899847-kplr4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0a9f38d0833", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:58.040431 containerd[1897]: 2025-01-13 20:53:57.977 [INFO][3397] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.129/32] ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" Jan 13 20:53:58.040431 containerd[1897]: 2025-01-13 20:53:57.977 [INFO][3397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a9f38d0833 ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" Jan 13 20:53:58.040431 containerd[1897]: 2025-01-13 20:53:57.995 [INFO][3397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" Jan 13 20:53:58.040431 containerd[1897]: 2025-01-13 20:53:57.996 [INFO][3397] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" 
WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"c39c5321-4dfc-4b73-a1d2-cf757388b130", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 53, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc", Pod:"nginx-deployment-6d5f899847-kplr4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0a9f38d0833", MAC:"f6:0f:87:12:19:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:58.040431 containerd[1897]: 2025-01-13 20:53:58.037 [INFO][3397] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc" Namespace="default" Pod="nginx-deployment-6d5f899847-kplr4" WorkloadEndpoint="172.31.29.104-k8s-nginx--deployment--6d5f899847--kplr4-eth0" Jan 13 20:53:58.067439 containerd[1897]: time="2025-01-13T20:53:58.066667966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:58.067439 containerd[1897]: time="2025-01-13T20:53:58.066740738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:58.067439 containerd[1897]: time="2025-01-13T20:53:58.066756116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:58.067439 containerd[1897]: time="2025-01-13T20:53:58.066842025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:58.074938 systemd-networkd[1737]: calia49c3809716: Link UP Jan 13 20:53:58.075747 systemd-networkd[1737]: calia49c3809716: Gained carrier Jan 13 20:53:58.102757 systemd[1]: Started cri-containerd-80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc.scope - libcontainer container 80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc. 
Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.456 [INFO][3388] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.653 [INFO][3388] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.104-k8s-csi--node--driver--jvgjw-eth0 csi-node-driver- calico-system 7e57e640-184c-47ee-a3aa-558418051dc1 786 0 2025-01-13 20:53:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.29.104 csi-node-driver-jvgjw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia49c3809716 [] []}} ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.653 [INFO][3388] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.819 [INFO][3412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" HandleID="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Workload="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.883 [INFO][3412] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" HandleID="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Workload="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad480), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.29.104", "pod":"csi-node-driver-jvgjw", "timestamp":"2025-01-13 20:53:57.81930105 +0000 UTC"}, Hostname:"172.31.29.104", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.883 [INFO][3412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.974 [INFO][3412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.974 [INFO][3412] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.104' Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.981 [INFO][3412] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:57.994 [INFO][3412] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.030 [INFO][3412] ipam/ipam.go 489: Trying affinity for 192.168.54.128/26 host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.037 [INFO][3412] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.128/26 host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.041 [INFO][3412] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.54.128/26 host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.041 [INFO][3412] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.046 [INFO][3412] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300 Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.055 [INFO][3412] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.068 [INFO][3412] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.130/26] block=192.168.54.128/26 handle="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.068 [INFO][3412] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.130/26] handle="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" host="172.31.29.104" Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.068 [INFO][3412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:53:58.117662 containerd[1897]: 2025-01-13 20:53:58.069 [INFO][3412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.130/26] IPv6=[] ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" HandleID="k8s-pod-network.316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Workload="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" Jan 13 20:53:58.118784 containerd[1897]: 2025-01-13 20:53:58.071 [INFO][3388] cni-plugin/k8s.go 386: Populated endpoint ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-csi--node--driver--jvgjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e57e640-184c-47ee-a3aa-558418051dc1", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"", Pod:"csi-node-driver-jvgjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia49c3809716", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:58.118784 containerd[1897]: 2025-01-13 20:53:58.072 [INFO][3388] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.130/32] ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" Jan 13 20:53:58.118784 containerd[1897]: 2025-01-13 20:53:58.072 [INFO][3388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia49c3809716 ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" Jan 13 20:53:58.118784 containerd[1897]: 2025-01-13 20:53:58.076 [INFO][3388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" Jan 13 20:53:58.118784 containerd[1897]: 2025-01-13 20:53:58.078 [INFO][3388] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-csi--node--driver--jvgjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e57e640-184c-47ee-a3aa-558418051dc1", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, 
time.January, 13, 20, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300", Pod:"csi-node-driver-jvgjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia49c3809716", MAC:"12:ee:26:07:46:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:53:58.118784 containerd[1897]: 2025-01-13 20:53:58.113 [INFO][3388] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300" Namespace="calico-system" Pod="csi-node-driver-jvgjw" WorkloadEndpoint="172.31.29.104-k8s-csi--node--driver--jvgjw-eth0" Jan 13 20:53:58.193595 containerd[1897]: time="2025-01-13T20:53:58.191345000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:58.193595 containerd[1897]: time="2025-01-13T20:53:58.191485351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:58.193595 containerd[1897]: time="2025-01-13T20:53:58.191503627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:58.193595 containerd[1897]: time="2025-01-13T20:53:58.191616357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:58.233772 systemd[1]: Started cri-containerd-316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300.scope - libcontainer container 316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300. Jan 13 20:53:58.235635 containerd[1897]: time="2025-01-13T20:53:58.235282420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kplr4,Uid:c39c5321-4dfc-4b73-a1d2-cf757388b130,Namespace:default,Attempt:6,} returns sandbox id \"80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc\"" Jan 13 20:53:58.240016 containerd[1897]: time="2025-01-13T20:53:58.239981831Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:53:58.281002 containerd[1897]: time="2025-01-13T20:53:58.280946258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jvgjw,Uid:7e57e640-184c-47ee-a3aa-558418051dc1,Namespace:calico-system,Attempt:9,} returns sandbox id \"316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300\"" Jan 13 20:53:58.437588 kubelet[2352]: I0113 20:53:58.437007 2352 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:53:58.792573 kernel: bpftool[3623]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:53:58.970694 kubelet[2352]: E0113 20:53:58.970646 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:59.188950 (udev-worker)[3363]: Network interface NamePolicy= disabled 
on kernel command line. Jan 13 20:53:59.193980 systemd-networkd[1737]: vxlan.calico: Link UP Jan 13 20:53:59.193989 systemd-networkd[1737]: vxlan.calico: Gained carrier Jan 13 20:53:59.550833 systemd-networkd[1737]: cali0a9f38d0833: Gained IPv6LL Jan 13 20:53:59.971270 kubelet[2352]: E0113 20:53:59.971024 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:00.126743 systemd-networkd[1737]: calia49c3809716: Gained IPv6LL Jan 13 20:54:00.896378 systemd-networkd[1737]: vxlan.calico: Gained IPv6LL Jan 13 20:54:00.972060 kubelet[2352]: E0113 20:54:00.972004 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:01.386761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602439138.mount: Deactivated successfully. Jan 13 20:54:01.484207 kubelet[2352]: I0113 20:54:01.480800 2352 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:54:01.972768 kubelet[2352]: E0113 20:54:01.972727 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:02.975180 kubelet[2352]: E0113 20:54:02.974011 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:03.023693 ntpd[1867]: Listen normally on 7 vxlan.calico 192.168.54.128:123 Jan 13 20:54:03.023794 ntpd[1867]: Listen normally on 8 cali0a9f38d0833 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 13 20:54:03.024803 ntpd[1867]: 13 Jan 20:54:03 ntpd[1867]: Listen normally on 7 vxlan.calico 192.168.54.128:123 Jan 13 20:54:03.024803 ntpd[1867]: 13 Jan 20:54:03 ntpd[1867]: Listen normally on 8 cali0a9f38d0833 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 13 20:54:03.024803 ntpd[1867]: 13 Jan 20:54:03 ntpd[1867]: Listen normally on 9 calia49c3809716 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 20:54:03.024803 
ntpd[1867]: 13 Jan 20:54:03 ntpd[1867]: Listen normally on 10 vxlan.calico [fe80::64de:28ff:fed2:ba43%5]:123 Jan 13 20:54:03.024004 ntpd[1867]: Listen normally on 9 calia49c3809716 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 13 20:54:03.024056 ntpd[1867]: Listen normally on 10 vxlan.calico [fe80::64de:28ff:fed2:ba43%5]:123 Jan 13 20:54:03.982639 kubelet[2352]: E0113 20:54:03.975635 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:04.302841 containerd[1897]: time="2025-01-13T20:54:04.302671601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:04.306768 containerd[1897]: time="2025-01-13T20:54:04.306375659Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 20:54:04.308567 containerd[1897]: time="2025-01-13T20:54:04.307997601Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:04.312968 containerd[1897]: time="2025-01-13T20:54:04.312820608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:04.314629 containerd[1897]: time="2025-01-13T20:54:04.314455372Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 6.074283548s" Jan 13 20:54:04.314629 containerd[1897]: time="2025-01-13T20:54:04.314502249Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image 
reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:54:04.326744 containerd[1897]: time="2025-01-13T20:54:04.325798028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:54:04.352224 containerd[1897]: time="2025-01-13T20:54:04.352164836Z" level=info msg="CreateContainer within sandbox \"80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:54:04.398245 containerd[1897]: time="2025-01-13T20:54:04.398188137Z" level=info msg="CreateContainer within sandbox \"80c6472953ff32bd0cc57303aaf65bb7915f4f287ac074a944c93d5763b3e8dc\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f7ba45110ef6d4797668069cc50e0b97042995482b51ca54be064241106d97f5\"" Jan 13 20:54:04.399282 containerd[1897]: time="2025-01-13T20:54:04.399245816Z" level=info msg="StartContainer for \"f7ba45110ef6d4797668069cc50e0b97042995482b51ca54be064241106d97f5\"" Jan 13 20:54:04.492756 systemd[1]: Started cri-containerd-f7ba45110ef6d4797668069cc50e0b97042995482b51ca54be064241106d97f5.scope - libcontainer container f7ba45110ef6d4797668069cc50e0b97042995482b51ca54be064241106d97f5. 
Jan 13 20:54:04.532736 containerd[1897]: time="2025-01-13T20:54:04.532684603Z" level=info msg="StartContainer for \"f7ba45110ef6d4797668069cc50e0b97042995482b51ca54be064241106d97f5\" returns successfully" Jan 13 20:54:04.979618 kubelet[2352]: E0113 20:54:04.979561 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:05.890223 containerd[1897]: time="2025-01-13T20:54:05.890164631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:05.892561 containerd[1897]: time="2025-01-13T20:54:05.892478074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:54:05.896672 containerd[1897]: time="2025-01-13T20:54:05.896490523Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:05.927071 containerd[1897]: time="2025-01-13T20:54:05.925885585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:05.927071 containerd[1897]: time="2025-01-13T20:54:05.926910815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.601065874s" Jan 13 20:54:05.927071 containerd[1897]: time="2025-01-13T20:54:05.926951090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:54:05.929457 containerd[1897]: time="2025-01-13T20:54:05.929420367Z" level=info msg="CreateContainer within sandbox \"316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:54:05.980482 kubelet[2352]: E0113 20:54:05.980428 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:05.988543 containerd[1897]: time="2025-01-13T20:54:05.988480192Z" level=info msg="CreateContainer within sandbox \"316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"24def5caabcc06e089e1e634c10c671a2a5fff4c489c0c11c07f337d07cedc45\"" Jan 13 20:54:05.989272 containerd[1897]: time="2025-01-13T20:54:05.989233741Z" level=info msg="StartContainer for \"24def5caabcc06e089e1e634c10c671a2a5fff4c489c0c11c07f337d07cedc45\"" Jan 13 20:54:06.046977 systemd[1]: Started cri-containerd-24def5caabcc06e089e1e634c10c671a2a5fff4c489c0c11c07f337d07cedc45.scope - libcontainer container 24def5caabcc06e089e1e634c10c671a2a5fff4c489c0c11c07f337d07cedc45. Jan 13 20:54:06.124169 update_engine[1878]: I20250113 20:54:06.122642 1878 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:54:06.133420 containerd[1897]: time="2025-01-13T20:54:06.132389651Z" level=info msg="StartContainer for \"24def5caabcc06e089e1e634c10c671a2a5fff4c489c0c11c07f337d07cedc45\" returns successfully" Jan 13 20:54:06.138083 containerd[1897]: time="2025-01-13T20:54:06.136740412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:54:06.299599 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3904) Jan 13 20:54:06.595639 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3904) Jan 13 20:54:06.981580 kubelet[2352]: E0113 20:54:06.981511 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:07.541031 containerd[1897]: time="2025-01-13T20:54:07.540981445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:07.542268 containerd[1897]: time="2025-01-13T20:54:07.542140293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:54:07.545014 containerd[1897]: time="2025-01-13T20:54:07.543785275Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:07.546601 containerd[1897]: time="2025-01-13T20:54:07.545866829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:07.546601 containerd[1897]: time="2025-01-13T20:54:07.546453884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.409670813s" Jan 13 20:54:07.546601 containerd[1897]: time="2025-01-13T20:54:07.546487984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:54:07.548373 containerd[1897]: time="2025-01-13T20:54:07.548345003Z" level=info msg="CreateContainer within sandbox \"316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:54:07.572090 containerd[1897]: time="2025-01-13T20:54:07.572040313Z" level=info msg="CreateContainer within sandbox \"316c8bb435cb98118fc1466a69789d1bac998d0bd47538054674dc59c5d2f300\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e7a56dd5d7e5d4cba31ce6f68301d26acc95ceb786c3673855b1f6e3690c3534\"" Jan 13 20:54:07.572681 containerd[1897]: time="2025-01-13T20:54:07.572645089Z" level=info msg="StartContainer for \"e7a56dd5d7e5d4cba31ce6f68301d26acc95ceb786c3673855b1f6e3690c3534\"" Jan 13 20:54:07.628841 systemd[1]: Started cri-containerd-e7a56dd5d7e5d4cba31ce6f68301d26acc95ceb786c3673855b1f6e3690c3534.scope - libcontainer container e7a56dd5d7e5d4cba31ce6f68301d26acc95ceb786c3673855b1f6e3690c3534. 
Jan 13 20:54:07.683477 containerd[1897]: time="2025-01-13T20:54:07.680879357Z" level=info msg="StartContainer for \"e7a56dd5d7e5d4cba31ce6f68301d26acc95ceb786c3673855b1f6e3690c3534\" returns successfully" Jan 13 20:54:07.982206 kubelet[2352]: E0113 20:54:07.982084 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:08.119011 kubelet[2352]: I0113 20:54:08.118980 2352 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:54:08.120868 kubelet[2352]: I0113 20:54:08.120850 2352 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:54:08.557914 kubelet[2352]: I0113 20:54:08.557872 2352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-kplr4" podStartSLOduration=11.48147675 podStartE2EDuration="17.557819571s" podCreationTimestamp="2025-01-13 20:53:51 +0000 UTC" firstStartedPulling="2025-01-13 20:53:58.238865838 +0000 UTC m=+22.621948381" lastFinishedPulling="2025-01-13 20:54:04.31520866 +0000 UTC m=+28.698291202" observedRunningTime="2025-01-13 20:54:05.537590203 +0000 UTC m=+29.920672756" watchObservedRunningTime="2025-01-13 20:54:08.557819571 +0000 UTC m=+32.940902126" Jan 13 20:54:08.983229 kubelet[2352]: E0113 20:54:08.983104 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:09.984298 kubelet[2352]: E0113 20:54:09.984242 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:10.985282 kubelet[2352]: E0113 20:54:10.985217 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 
20:54:11.986416 kubelet[2352]: E0113 20:54:11.986363 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:12.986833 kubelet[2352]: E0113 20:54:12.986778 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:13.987128 kubelet[2352]: E0113 20:54:13.987043 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:14.987300 kubelet[2352]: E0113 20:54:14.987248 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:15.952434 kubelet[2352]: E0113 20:54:15.952382 2352 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:15.988256 kubelet[2352]: E0113 20:54:15.988199 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:16.989150 kubelet[2352]: E0113 20:54:16.989105 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:17.989952 kubelet[2352]: E0113 20:54:17.989908 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:18.682817 kubelet[2352]: I0113 20:54:18.682774 2352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jvgjw" podStartSLOduration=33.418651802 podStartE2EDuration="42.682734527s" podCreationTimestamp="2025-01-13 20:53:36 +0000 UTC" firstStartedPulling="2025-01-13 20:53:58.282876097 +0000 UTC m=+22.665958639" lastFinishedPulling="2025-01-13 20:54:07.546958819 +0000 UTC m=+31.930041364" observedRunningTime="2025-01-13 20:54:08.558217826 +0000 UTC m=+32.941300375" 
watchObservedRunningTime="2025-01-13 20:54:18.682734527 +0000 UTC m=+43.065817079" Jan 13 20:54:18.683130 kubelet[2352]: I0113 20:54:18.682992 2352 topology_manager.go:215] "Topology Admit Handler" podUID="9a213143-f329-4cb8-ba9a-d5d713c17d15" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 20:54:18.722730 systemd[1]: Created slice kubepods-besteffort-pod9a213143_f329_4cb8_ba9a_d5d713c17d15.slice - libcontainer container kubepods-besteffort-pod9a213143_f329_4cb8_ba9a_d5d713c17d15.slice. Jan 13 20:54:18.854597 kubelet[2352]: I0113 20:54:18.854548 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9a213143-f329-4cb8-ba9a-d5d713c17d15-data\") pod \"nfs-server-provisioner-0\" (UID: \"9a213143-f329-4cb8-ba9a-d5d713c17d15\") " pod="default/nfs-server-provisioner-0" Jan 13 20:54:18.854597 kubelet[2352]: I0113 20:54:18.854607 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn78x\" (UniqueName: \"kubernetes.io/projected/9a213143-f329-4cb8-ba9a-d5d713c17d15-kube-api-access-xn78x\") pod \"nfs-server-provisioner-0\" (UID: \"9a213143-f329-4cb8-ba9a-d5d713c17d15\") " pod="default/nfs-server-provisioner-0" Jan 13 20:54:18.996934 kubelet[2352]: E0113 20:54:18.996890 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:19.038758 containerd[1897]: time="2025-01-13T20:54:19.038712294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9a213143-f329-4cb8-ba9a-d5d713c17d15,Namespace:default,Attempt:0,}" Jan 13 20:54:19.331611 systemd-networkd[1737]: cali60e51b789ff: Link UP Jan 13 20:54:19.332033 systemd-networkd[1737]: cali60e51b789ff: Gained carrier Jan 13 20:54:19.338030 (udev-worker)[4165]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.139 [INFO][4148] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.104-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 9a213143-f329-4cb8-ba9a-d5d713c17d15 1175 0 2025-01-13 20:54:18 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.29.104 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.139 [INFO][4148] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.205 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" 
HandleID="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Workload="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.232 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" HandleID="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Workload="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e10), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.104", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:54:19.205685696 +0000 UTC"}, Hostname:"172.31.29.104", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.232 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.232 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.232 [INFO][4157] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.104' Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.235 [INFO][4157] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.252 [INFO][4157] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.270 [INFO][4157] ipam/ipam.go 489: Trying affinity for 192.168.54.128/26 host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.281 [INFO][4157] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.128/26 host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.294 [INFO][4157] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.294 [INFO][4157] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.299 [INFO][4157] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.310 [INFO][4157] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.325 [INFO][4157] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.131/26] block=192.168.54.128/26 
handle="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.325 [INFO][4157] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.131/26] handle="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" host="172.31.29.104" Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.325 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:54:19.368779 containerd[1897]: 2025-01-13 20:54:19.325 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.131/26] IPv6=[] ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" HandleID="k8s-pod-network.d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Workload="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:54:19.372494 containerd[1897]: 2025-01-13 20:54:19.327 [INFO][4148] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9a213143-f329-4cb8-ba9a-d5d713c17d15", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:54:19.372494 containerd[1897]: 2025-01-13 20:54:19.327 [INFO][4148] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.131/32] ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:54:19.372494 containerd[1897]: 2025-01-13 20:54:19.327 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:54:19.372494 containerd[1897]: 2025-01-13 20:54:19.331 [INFO][4148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:54:19.375352 containerd[1897]: 2025-01-13 20:54:19.334 [INFO][4148] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9a213143-f329-4cb8-ba9a-d5d713c17d15", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"56:ae:c3:96:ea:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:54:19.375352 containerd[1897]: 2025-01-13 20:54:19.366 [INFO][4148] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.104-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:54:19.412906 containerd[1897]: time="2025-01-13T20:54:19.412638898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:54:19.412906 containerd[1897]: time="2025-01-13T20:54:19.412695711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:54:19.412906 containerd[1897]: time="2025-01-13T20:54:19.412711163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:54:19.412906 containerd[1897]: time="2025-01-13T20:54:19.412834830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:54:19.449729 systemd[1]: Started cri-containerd-d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb.scope - libcontainer container d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb. Jan 13 20:54:19.502435 containerd[1897]: time="2025-01-13T20:54:19.502389346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9a213143-f329-4cb8-ba9a-d5d713c17d15,Namespace:default,Attempt:0,} returns sandbox id \"d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb\"" Jan 13 20:54:19.505252 containerd[1897]: time="2025-01-13T20:54:19.505183062Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:54:19.997513 kubelet[2352]: E0113 20:54:19.997452 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:20.997887 kubelet[2352]: E0113 20:54:20.997801 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:21.056444 systemd-networkd[1737]: cali60e51b789ff: Gained IPv6LL Jan 13 20:54:21.998890 kubelet[2352]: E0113 20:54:21.998851 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:54:22.083136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633597526.mount: Deactivated successfully. Jan 13 20:54:22.999647 kubelet[2352]: E0113 20:54:22.999609 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:24.000329 kubelet[2352]: E0113 20:54:24.000274 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:24.023597 ntpd[1867]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 20:54:24.024329 ntpd[1867]: 13 Jan 20:54:24 ntpd[1867]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 20:54:24.340844 containerd[1897]: time="2025-01-13T20:54:24.340703194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:24.342456 containerd[1897]: time="2025-01-13T20:54:24.342275026Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 13 20:54:24.343829 containerd[1897]: time="2025-01-13T20:54:24.343353935Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:24.350789 containerd[1897]: time="2025-01-13T20:54:24.350729784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:54:24.354811 containerd[1897]: time="2025-01-13T20:54:24.354678213Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.849329313s" Jan 13 20:54:24.355109 containerd[1897]: time="2025-01-13T20:54:24.354835177Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 20:54:24.357146 containerd[1897]: time="2025-01-13T20:54:24.357111607Z" level=info msg="CreateContainer within sandbox \"d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:54:24.374855 containerd[1897]: time="2025-01-13T20:54:24.374804639Z" level=info msg="CreateContainer within sandbox \"d37b2fc8ed52fde249ff108da456ba6e5f4689d416a2b36c57264e79f9e1f2bb\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ec0a83f6101cb35d6820474f7370bbe9d08afe997dd44041a5a99ec85b939505\"" Jan 13 20:54:24.375502 containerd[1897]: time="2025-01-13T20:54:24.375471996Z" level=info msg="StartContainer for \"ec0a83f6101cb35d6820474f7370bbe9d08afe997dd44041a5a99ec85b939505\"" Jan 13 20:54:24.414745 systemd[1]: Started cri-containerd-ec0a83f6101cb35d6820474f7370bbe9d08afe997dd44041a5a99ec85b939505.scope - libcontainer container ec0a83f6101cb35d6820474f7370bbe9d08afe997dd44041a5a99ec85b939505. 
Jan 13 20:54:24.497950 containerd[1897]: time="2025-01-13T20:54:24.497901020Z" level=info msg="StartContainer for \"ec0a83f6101cb35d6820474f7370bbe9d08afe997dd44041a5a99ec85b939505\" returns successfully" Jan 13 20:54:24.623578 kubelet[2352]: I0113 20:54:24.623431 2352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.7722337430000001 podStartE2EDuration="6.623389543s" podCreationTimestamp="2025-01-13 20:54:18 +0000 UTC" firstStartedPulling="2025-01-13 20:54:19.503965578 +0000 UTC m=+43.887048124" lastFinishedPulling="2025-01-13 20:54:24.355121382 +0000 UTC m=+48.738203924" observedRunningTime="2025-01-13 20:54:24.623268177 +0000 UTC m=+49.006350728" watchObservedRunningTime="2025-01-13 20:54:24.623389543 +0000 UTC m=+49.006472095" Jan 13 20:54:25.002701 kubelet[2352]: E0113 20:54:25.000890 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:26.002289 kubelet[2352]: E0113 20:54:26.002235 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:27.003380 kubelet[2352]: E0113 20:54:27.003317 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:28.004357 kubelet[2352]: E0113 20:54:28.004302 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:29.005008 kubelet[2352]: E0113 20:54:29.004919 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:30.006185 kubelet[2352]: E0113 20:54:30.006132 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:31.006717 kubelet[2352]: E0113 20:54:31.006663 2352 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:32.008204 kubelet[2352]: E0113 20:54:32.008152 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:33.009220 kubelet[2352]: E0113 20:54:33.009168 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:34.010437 kubelet[2352]: E0113 20:54:34.010388 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:35.011034 kubelet[2352]: E0113 20:54:35.010981 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:35.953211 kubelet[2352]: E0113 20:54:35.953090 2352 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:36.005178 containerd[1897]: time="2025-01-13T20:54:36.005136819Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:54:36.005757 containerd[1897]: time="2025-01-13T20:54:36.005267824Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:54:36.005757 containerd[1897]: time="2025-01-13T20:54:36.005284944Z" level=info msg="StopPodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:54:36.012123 kubelet[2352]: E0113 20:54:36.011970 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:54:36.024357 containerd[1897]: time="2025-01-13T20:54:36.024291638Z" level=info msg="RemovePodSandbox for \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:54:36.033731 containerd[1897]: 
time="2025-01-13T20:54:36.033675146Z" level=info msg="Forcibly stopping sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\"" Jan 13 20:54:36.034002 containerd[1897]: time="2025-01-13T20:54:36.033818325Z" level=info msg="TearDown network for sandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" successfully" Jan 13 20:54:36.054891 containerd[1897]: time="2025-01-13T20:54:36.054839828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:54:36.055213 containerd[1897]: time="2025-01-13T20:54:36.054929025Z" level=info msg="RemovePodSandbox \"bb9863ffb1a7331c4c28ec9357cde11894f42885a2313f4b6419895fbf7780d9\" returns successfully" Jan 13 20:54:36.055799 containerd[1897]: time="2025-01-13T20:54:36.055766218Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:54:36.056001 containerd[1897]: time="2025-01-13T20:54:36.055976652Z" level=info msg="TearDown network for sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:54:36.056060 containerd[1897]: time="2025-01-13T20:54:36.055998330Z" level=info msg="StopPodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:54:36.056748 containerd[1897]: time="2025-01-13T20:54:36.056690177Z" level=info msg="RemovePodSandbox for \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:54:36.056889 containerd[1897]: time="2025-01-13T20:54:36.056833920Z" level=info msg="Forcibly stopping sandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\"" Jan 13 20:54:36.057694 containerd[1897]: time="2025-01-13T20:54:36.057601721Z" level=info msg="TearDown network for sandbox 
\"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" successfully" Jan 13 20:54:36.061455 containerd[1897]: time="2025-01-13T20:54:36.061230990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:54:36.061756 containerd[1897]: time="2025-01-13T20:54:36.061484435Z" level=info msg="RemovePodSandbox \"f38d0825a97b8cfed1d77bd4afb95892c573d7908acc47d8a5355f58cc933d46\" returns successfully" Jan 13 20:54:36.062235 containerd[1897]: time="2025-01-13T20:54:36.062202662Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:54:36.062394 containerd[1897]: time="2025-01-13T20:54:36.062367441Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:54:36.062452 containerd[1897]: time="2025-01-13T20:54:36.062390382Z" level=info msg="StopPodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:54:36.064839 containerd[1897]: time="2025-01-13T20:54:36.063166674Z" level=info msg="RemovePodSandbox for \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:54:36.064839 containerd[1897]: time="2025-01-13T20:54:36.063197885Z" level=info msg="Forcibly stopping sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\"" Jan 13 20:54:36.064839 containerd[1897]: time="2025-01-13T20:54:36.063323010Z" level=info msg="TearDown network for sandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" successfully" Jan 13 20:54:36.066610 containerd[1897]: time="2025-01-13T20:54:36.066566093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:54:36.066705 containerd[1897]: time="2025-01-13T20:54:36.066649676Z" level=info msg="RemovePodSandbox \"ba667cabcb5807129809c6d0edfa1817dd9efca874cac44a9dfe0fca6d7c79cf\" returns successfully" Jan 13 20:54:36.067076 containerd[1897]: time="2025-01-13T20:54:36.067047828Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:54:36.067215 containerd[1897]: time="2025-01-13T20:54:36.067148700Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 13 20:54:36.067269 containerd[1897]: time="2025-01-13T20:54:36.067214024Z" level=info msg="StopPodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:54:36.069123 containerd[1897]: time="2025-01-13T20:54:36.067753311Z" level=info msg="RemovePodSandbox for \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:54:36.069123 containerd[1897]: time="2025-01-13T20:54:36.067975014Z" level=info msg="Forcibly stopping sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\"" Jan 13 20:54:36.069123 containerd[1897]: time="2025-01-13T20:54:36.068141737Z" level=info msg="TearDown network for sandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" successfully" Jan 13 20:54:36.072101 containerd[1897]: time="2025-01-13T20:54:36.071863637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:54:36.072101 containerd[1897]: time="2025-01-13T20:54:36.072008314Z" level=info msg="RemovePodSandbox \"ff5dc2237975fb4e02e361f6f416cc73a5a7b7a4b0dc9d3faa339c28076094f0\" returns successfully" Jan 13 20:54:36.072642 containerd[1897]: time="2025-01-13T20:54:36.072567356Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:54:36.072786 containerd[1897]: time="2025-01-13T20:54:36.072750672Z" level=info msg="TearDown network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" successfully" Jan 13 20:54:36.072786 containerd[1897]: time="2025-01-13T20:54:36.072769508Z" level=info msg="StopPodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" returns successfully" Jan 13 20:54:36.073190 containerd[1897]: time="2025-01-13T20:54:36.073163237Z" level=info msg="RemovePodSandbox for \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:54:36.073266 containerd[1897]: time="2025-01-13T20:54:36.073190952Z" level=info msg="Forcibly stopping sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\"" Jan 13 20:54:36.073322 containerd[1897]: time="2025-01-13T20:54:36.073273051Z" level=info msg="TearDown network for sandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" successfully" Jan 13 20:54:36.076323 containerd[1897]: time="2025-01-13T20:54:36.076187657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:54:36.076435 containerd[1897]: time="2025-01-13T20:54:36.076350610Z" level=info msg="RemovePodSandbox \"29acd988e98c726f2dcbd6e0620fe05539540f99833b5eda6c5488a0f52029fe\" returns successfully" Jan 13 20:54:36.076873 containerd[1897]: time="2025-01-13T20:54:36.076837966Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" Jan 13 20:54:36.076968 containerd[1897]: time="2025-01-13T20:54:36.076943699Z" level=info msg="TearDown network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" successfully" Jan 13 20:54:36.076968 containerd[1897]: time="2025-01-13T20:54:36.076958989Z" level=info msg="StopPodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" returns successfully" Jan 13 20:54:36.077426 containerd[1897]: time="2025-01-13T20:54:36.077386646Z" level=info msg="RemovePodSandbox for \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" Jan 13 20:54:36.077426 containerd[1897]: time="2025-01-13T20:54:36.077417586Z" level=info msg="Forcibly stopping sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\"" Jan 13 20:54:36.077560 containerd[1897]: time="2025-01-13T20:54:36.077496486Z" level=info msg="TearDown network for sandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" successfully" Jan 13 20:54:36.080140 containerd[1897]: time="2025-01-13T20:54:36.080100331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:54:36.080240 containerd[1897]: time="2025-01-13T20:54:36.080153019Z" level=info msg="RemovePodSandbox \"d8ecaf006d94a064cfcfe732848a8de8ea7127857a89d6f17b54db1243976885\" returns successfully" Jan 13 20:54:36.080945 containerd[1897]: time="2025-01-13T20:54:36.080916336Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\"" Jan 13 20:54:36.081031 containerd[1897]: time="2025-01-13T20:54:36.081018218Z" level=info msg="TearDown network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" successfully" Jan 13 20:54:36.081084 containerd[1897]: time="2025-01-13T20:54:36.081034755Z" level=info msg="StopPodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" returns successfully" Jan 13 20:54:36.081600 containerd[1897]: time="2025-01-13T20:54:36.081572406Z" level=info msg="RemovePodSandbox for \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\"" Jan 13 20:54:36.081688 containerd[1897]: time="2025-01-13T20:54:36.081601480Z" level=info msg="Forcibly stopping sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\"" Jan 13 20:54:36.081813 containerd[1897]: time="2025-01-13T20:54:36.081693427Z" level=info msg="TearDown network for sandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" successfully" Jan 13 20:54:36.084476 containerd[1897]: time="2025-01-13T20:54:36.084444272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:54:36.084581 containerd[1897]: time="2025-01-13T20:54:36.084495813Z" level=info msg="RemovePodSandbox \"8d0faa00f0efc5f83683a56f6a196ec5340d1df14e870b8a4f14a66a8783359e\" returns successfully" Jan 13 20:54:36.085059 containerd[1897]: time="2025-01-13T20:54:36.085028965Z" level=info msg="StopPodSandbox for \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\"" Jan 13 20:54:36.085158 containerd[1897]: time="2025-01-13T20:54:36.085132312Z" level=info msg="TearDown network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" successfully" Jan 13 20:54:36.085158 containerd[1897]: time="2025-01-13T20:54:36.085153115Z" level=info msg="StopPodSandbox for \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" returns successfully" Jan 13 20:54:36.085535 containerd[1897]: time="2025-01-13T20:54:36.085500163Z" level=info msg="RemovePodSandbox for \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\"" Jan 13 20:54:36.085535 containerd[1897]: time="2025-01-13T20:54:36.085545589Z" level=info msg="Forcibly stopping sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\"" Jan 13 20:54:36.085849 containerd[1897]: time="2025-01-13T20:54:36.085712011Z" level=info msg="TearDown network for sandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" successfully" Jan 13 20:54:36.092315 containerd[1897]: time="2025-01-13T20:54:36.091928839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:54:36.092315 containerd[1897]: time="2025-01-13T20:54:36.092021893Z" level=info msg="RemovePodSandbox \"53b14139a5d259469e4c635bd815d18d5114082ec6225017bc4cf6799db3b10b\" returns successfully" Jan 13 20:54:36.093054 containerd[1897]: time="2025-01-13T20:54:36.092995247Z" level=info msg="StopPodSandbox for \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\"" Jan 13 20:54:36.093198 containerd[1897]: time="2025-01-13T20:54:36.093174904Z" level=info msg="TearDown network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\" successfully" Jan 13 20:54:36.093275 containerd[1897]: time="2025-01-13T20:54:36.093226941Z" level=info msg="StopPodSandbox for \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\" returns successfully" Jan 13 20:54:36.094087 containerd[1897]: time="2025-01-13T20:54:36.094005199Z" level=info msg="RemovePodSandbox for \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\"" Jan 13 20:54:36.094221 containerd[1897]: time="2025-01-13T20:54:36.094088453Z" level=info msg="Forcibly stopping sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\"" Jan 13 20:54:36.094409 containerd[1897]: time="2025-01-13T20:54:36.094201595Z" level=info msg="TearDown network for sandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\" successfully" Jan 13 20:54:36.098077 containerd[1897]: time="2025-01-13T20:54:36.098033680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:54:36.098195 containerd[1897]: time="2025-01-13T20:54:36.098087453Z" level=info msg="RemovePodSandbox \"b6fbf059bee3b8dd6cbcde3c41be5b485cf14e1fb28c19a0b9aac915832c18f4\" returns successfully" Jan 13 20:54:36.098771 containerd[1897]: time="2025-01-13T20:54:36.098735533Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:54:36.098860 containerd[1897]: time="2025-01-13T20:54:36.098842247Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:54:36.098915 containerd[1897]: time="2025-01-13T20:54:36.098858462Z" level=info msg="StopPodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully" Jan 13 20:54:36.099183 containerd[1897]: time="2025-01-13T20:54:36.099159730Z" level=info msg="RemovePodSandbox for \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:54:36.099355 containerd[1897]: time="2025-01-13T20:54:36.099185659Z" level=info msg="Forcibly stopping sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\"" Jan 13 20:54:36.099427 containerd[1897]: time="2025-01-13T20:54:36.099377749Z" level=info msg="TearDown network for sandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" successfully" Jan 13 20:54:36.102976 containerd[1897]: time="2025-01-13T20:54:36.102936220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:54:36.103227 containerd[1897]: time="2025-01-13T20:54:36.102990516Z" level=info msg="RemovePodSandbox \"6c638dcac0b1d76fc9ab77875bbd4be2876bed872f0d2987b2390630889ec9e5\" returns successfully"
Jan 13 20:54:36.103907 containerd[1897]: time="2025-01-13T20:54:36.103870842Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\""
Jan 13 20:54:36.103997 containerd[1897]: time="2025-01-13T20:54:36.103979039Z" level=info msg="TearDown network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" successfully"
Jan 13 20:54:36.104045 containerd[1897]: time="2025-01-13T20:54:36.103998031Z" level=info msg="StopPodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" returns successfully"
Jan 13 20:54:36.104316 containerd[1897]: time="2025-01-13T20:54:36.104290146Z" level=info msg="RemovePodSandbox for \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\""
Jan 13 20:54:36.104565 containerd[1897]: time="2025-01-13T20:54:36.104317704Z" level=info msg="Forcibly stopping sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\""
Jan 13 20:54:36.106729 containerd[1897]: time="2025-01-13T20:54:36.104397149Z" level=info msg="TearDown network for sandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" successfully"
Jan 13 20:54:36.110999 containerd[1897]: time="2025-01-13T20:54:36.110946865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:54:36.111348 containerd[1897]: time="2025-01-13T20:54:36.111025223Z" level=info msg="RemovePodSandbox \"a5dc3be07e2816ef3a454257de853fdaff40ab232662b6088a66d33a54b3b1ee\" returns successfully"
Jan 13 20:54:36.117421 containerd[1897]: time="2025-01-13T20:54:36.117145863Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\""
Jan 13 20:54:36.118982 containerd[1897]: time="2025-01-13T20:54:36.118938208Z" level=info msg="TearDown network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" successfully"
Jan 13 20:54:36.119686 containerd[1897]: time="2025-01-13T20:54:36.118965048Z" level=info msg="StopPodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" returns successfully"
Jan 13 20:54:36.122406 containerd[1897]: time="2025-01-13T20:54:36.122101514Z" level=info msg="RemovePodSandbox for \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\""
Jan 13 20:54:36.122406 containerd[1897]: time="2025-01-13T20:54:36.122137073Z" level=info msg="Forcibly stopping sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\""
Jan 13 20:54:36.122406 containerd[1897]: time="2025-01-13T20:54:36.122231564Z" level=info msg="TearDown network for sandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" successfully"
Jan 13 20:54:36.129986 containerd[1897]: time="2025-01-13T20:54:36.129900869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:54:36.130189 containerd[1897]: time="2025-01-13T20:54:36.130009353Z" level=info msg="RemovePodSandbox \"c666f01e324ddc7231cbe8533b1099c98410cef6f55fbb34ad8ec6845f32898a\" returns successfully"
Jan 13 20:54:36.130702 containerd[1897]: time="2025-01-13T20:54:36.130655975Z" level=info msg="StopPodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\""
Jan 13 20:54:36.130800 containerd[1897]: time="2025-01-13T20:54:36.130773331Z" level=info msg="TearDown network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" successfully"
Jan 13 20:54:36.130800 containerd[1897]: time="2025-01-13T20:54:36.130788637Z" level=info msg="StopPodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" returns successfully"
Jan 13 20:54:36.131856 containerd[1897]: time="2025-01-13T20:54:36.131814902Z" level=info msg="RemovePodSandbox for \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\""
Jan 13 20:54:36.131940 containerd[1897]: time="2025-01-13T20:54:36.131861293Z" level=info msg="Forcibly stopping sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\""
Jan 13 20:54:36.132663 containerd[1897]: time="2025-01-13T20:54:36.131949148Z" level=info msg="TearDown network for sandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" successfully"
Jan 13 20:54:36.138209 containerd[1897]: time="2025-01-13T20:54:36.138150501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:54:36.138874 containerd[1897]: time="2025-01-13T20:54:36.138838783Z" level=info msg="RemovePodSandbox \"5b11dd9b3847d6bf8b4063114073a02c862e3cbf3378d517a3e44be4693f2ff0\" returns successfully"
Jan 13 20:54:36.140373 containerd[1897]: time="2025-01-13T20:54:36.140341850Z" level=info msg="StopPodSandbox for \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\""
Jan 13 20:54:36.140795 containerd[1897]: time="2025-01-13T20:54:36.140702040Z" level=info msg="TearDown network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" successfully"
Jan 13 20:54:36.140795 containerd[1897]: time="2025-01-13T20:54:36.140755057Z" level=info msg="StopPodSandbox for \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" returns successfully"
Jan 13 20:54:36.144309 containerd[1897]: time="2025-01-13T20:54:36.143842135Z" level=info msg="RemovePodSandbox for \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\""
Jan 13 20:54:36.144309 containerd[1897]: time="2025-01-13T20:54:36.143985777Z" level=info msg="Forcibly stopping sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\""
Jan 13 20:54:36.144826 containerd[1897]: time="2025-01-13T20:54:36.144610206Z" level=info msg="TearDown network for sandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" successfully"
Jan 13 20:54:36.152757 containerd[1897]: time="2025-01-13T20:54:36.152686302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:54:36.153191 containerd[1897]: time="2025-01-13T20:54:36.153000454Z" level=info msg="RemovePodSandbox \"8be99859d0c73f1265c1f07336a3aaed1d0bdf3a8fb78260115788df8849dcbf\" returns successfully"
Jan 13 20:54:36.153543 containerd[1897]: time="2025-01-13T20:54:36.153493431Z" level=info msg="StopPodSandbox for \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\""
Jan 13 20:54:36.153661 containerd[1897]: time="2025-01-13T20:54:36.153624567Z" level=info msg="TearDown network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\" successfully"
Jan 13 20:54:36.153661 containerd[1897]: time="2025-01-13T20:54:36.153641447Z" level=info msg="StopPodSandbox for \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\" returns successfully"
Jan 13 20:54:36.153988 containerd[1897]: time="2025-01-13T20:54:36.153965076Z" level=info msg="RemovePodSandbox for \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\""
Jan 13 20:54:36.154096 containerd[1897]: time="2025-01-13T20:54:36.154069700Z" level=info msg="Forcibly stopping sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\""
Jan 13 20:54:36.154220 containerd[1897]: time="2025-01-13T20:54:36.154170300Z" level=info msg="TearDown network for sandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\" successfully"
Jan 13 20:54:36.168605 containerd[1897]: time="2025-01-13T20:54:36.167442032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:54:36.168605 containerd[1897]: time="2025-01-13T20:54:36.167562695Z" level=info msg="RemovePodSandbox \"205184cbaab47a01ca2366438cbb4907c98ac5e6849b313cebc1673d9f1fd6db\" returns successfully"
Jan 13 20:54:37.012555 kubelet[2352]: E0113 20:54:37.012475 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:38.012754 kubelet[2352]: E0113 20:54:38.012712 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:39.013424 kubelet[2352]: E0113 20:54:39.013368 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:40.014268 kubelet[2352]: E0113 20:54:40.014212 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:41.015353 kubelet[2352]: E0113 20:54:41.015297 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:42.015945 kubelet[2352]: E0113 20:54:42.015889 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:43.016043 kubelet[2352]: E0113 20:54:43.015989 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:44.016364 kubelet[2352]: E0113 20:54:44.016312 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:45.016999 kubelet[2352]: E0113 20:54:45.016944 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:46.018085 kubelet[2352]: E0113 20:54:46.018032 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:47.018247 kubelet[2352]: E0113 20:54:47.018177 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:48.018949 kubelet[2352]: E0113 20:54:48.018834 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:49.020050 kubelet[2352]: E0113 20:54:49.019998 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:50.020629 kubelet[2352]: E0113 20:54:50.020575 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:51.020942 kubelet[2352]: E0113 20:54:51.020881 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:52.021228 kubelet[2352]: E0113 20:54:52.021174 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:52.864800 kubelet[2352]: I0113 20:54:52.864755 2352 topology_manager.go:215] "Topology Admit Handler" podUID="b714f297-083a-4989-9fc3-28a3ac77a334" podNamespace="default" podName="test-pod-1"
Jan 13 20:54:52.884964 systemd[1]: Created slice kubepods-besteffort-podb714f297_083a_4989_9fc3_28a3ac77a334.slice - libcontainer container kubepods-besteffort-podb714f297_083a_4989_9fc3_28a3ac77a334.slice.
Jan 13 20:54:52.976407 kubelet[2352]: I0113 20:54:52.976351 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-20531dfb-a78c-4b0a-9a66-76e5b54ede46\" (UniqueName: \"kubernetes.io/nfs/b714f297-083a-4989-9fc3-28a3ac77a334-pvc-20531dfb-a78c-4b0a-9a66-76e5b54ede46\") pod \"test-pod-1\" (UID: \"b714f297-083a-4989-9fc3-28a3ac77a334\") " pod="default/test-pod-1"
Jan 13 20:54:52.976407 kubelet[2352]: I0113 20:54:52.976415 2352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsnlk\" (UniqueName: \"kubernetes.io/projected/b714f297-083a-4989-9fc3-28a3ac77a334-kube-api-access-fsnlk\") pod \"test-pod-1\" (UID: \"b714f297-083a-4989-9fc3-28a3ac77a334\") " pod="default/test-pod-1"
Jan 13 20:54:53.022258 kubelet[2352]: E0113 20:54:53.022214 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:53.156725 kernel: FS-Cache: Loaded
Jan 13 20:54:53.245876 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:54:53.246000 kernel: RPC: Registered udp transport module.
Jan 13 20:54:53.246024 kernel: RPC: Registered tcp transport module.
Jan 13 20:54:53.246685 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:54:53.246736 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:54:53.669564 kernel: NFS: Registering the id_resolver key type
Jan 13 20:54:53.669676 kernel: Key type id_resolver registered
Jan 13 20:54:53.669701 kernel: Key type id_legacy registered
Jan 13 20:54:53.719558 nfsidmap[4380]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 20:54:53.724515 nfsidmap[4381]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 20:54:53.800269 containerd[1897]: time="2025-01-13T20:54:53.800229123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b714f297-083a-4989-9fc3-28a3ac77a334,Namespace:default,Attempt:0,}"
Jan 13 20:54:54.016402 (udev-worker)[4378]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:54:54.017830 systemd-networkd[1737]: cali5ec59c6bf6e: Link UP
Jan 13 20:54:54.018222 systemd-networkd[1737]: cali5ec59c6bf6e: Gained carrier
Jan 13 20:54:54.022713 kubelet[2352]: E0113 20:54:54.022665 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.872 [INFO][4382] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.104-k8s-test--pod--1-eth0 default b714f297-083a-4989-9fc3-28a3ac77a334 1300 0 2025-01-13 20:54:20 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.29.104 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.872 [INFO][4382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-eth0"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.913 [INFO][4393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" HandleID="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Workload="172.31.29.104-k8s-test--pod--1-eth0"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.954 [INFO][4393] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" HandleID="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Workload="172.31.29.104-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.104", "pod":"test-pod-1", "timestamp":"2025-01-13 20:54:53.913008651 +0000 UTC"}, Hostname:"172.31.29.104", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.954 [INFO][4393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.954 [INFO][4393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.954 [INFO][4393] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.104'
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.958 [INFO][4393] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.964 [INFO][4393] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.971 [INFO][4393] ipam/ipam.go 489: Trying affinity for 192.168.54.128/26 host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.976 [INFO][4393] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.128/26 host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.981 [INFO][4393] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.981 [INFO][4393] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.984 [INFO][4393] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:53.997 [INFO][4393] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:54.008 [INFO][4393] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.132/26] block=192.168.54.128/26 handle="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:54.008 [INFO][4393] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.132/26] handle="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" host="172.31.29.104"
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:54.008 [INFO][4393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:54:54.032881 containerd[1897]: 2025-01-13 20:54:54.008 [INFO][4393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.132/26] IPv6=[] ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" HandleID="k8s-pod-network.caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Workload="172.31.29.104-k8s-test--pod--1-eth0"
Jan 13 20:54:54.034010 containerd[1897]: 2025-01-13 20:54:54.014 [INFO][4382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"b714f297-083a-4989-9fc3-28a3ac77a334", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:54:54.034010 containerd[1897]: 2025-01-13 20:54:54.014 [INFO][4382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.132/32] ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-eth0"
Jan 13 20:54:54.034010 containerd[1897]: 2025-01-13 20:54:54.014 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-eth0"
Jan 13 20:54:54.034010 containerd[1897]: 2025-01-13 20:54:54.016 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-eth0"
Jan 13 20:54:54.034010 containerd[1897]: 2025-01-13 20:54:54.018 [INFO][4382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.104-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"b714f297-083a-4989-9fc3-28a3ac77a334", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.104", ContainerID:"caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d2:dd:da:a9:7f:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:54:54.034010 containerd[1897]: 2025-01-13 20:54:54.031 [INFO][4382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.104-k8s-test--pod--1-eth0"
Jan 13 20:54:54.078281 containerd[1897]: time="2025-01-13T20:54:54.077864411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:54:54.078281 containerd[1897]: time="2025-01-13T20:54:54.077932359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:54:54.078281 containerd[1897]: time="2025-01-13T20:54:54.077955187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:54.078281 containerd[1897]: time="2025-01-13T20:54:54.078054295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:54:54.119722 systemd[1]: Started cri-containerd-caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572.scope - libcontainer container caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572.
Jan 13 20:54:54.174451 containerd[1897]: time="2025-01-13T20:54:54.174406407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b714f297-083a-4989-9fc3-28a3ac77a334,Namespace:default,Attempt:0,} returns sandbox id \"caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572\""
Jan 13 20:54:54.176658 containerd[1897]: time="2025-01-13T20:54:54.176474657Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:54:54.524980 containerd[1897]: time="2025-01-13T20:54:54.524928708Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:54:54.527331 containerd[1897]: time="2025-01-13T20:54:54.526730367Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:54:54.538333 containerd[1897]: time="2025-01-13T20:54:54.538279355Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 361.764579ms"
Jan 13 20:54:54.538548 containerd[1897]: time="2025-01-13T20:54:54.538495740Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 20:54:54.548314 containerd[1897]: time="2025-01-13T20:54:54.548270265Z" level=info msg="CreateContainer within sandbox \"caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:54:54.609938 containerd[1897]: time="2025-01-13T20:54:54.609862981Z" level=info msg="CreateContainer within sandbox \"caf840f7b4f66ca49078da9325b90f425c713e506c2072409ed8b66ce6b04572\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7694941557a7e80aaa833cb87ae26dfa19413dd349de3cd589eac78a352c5411\""
Jan 13 20:54:54.613833 containerd[1897]: time="2025-01-13T20:54:54.611870045Z" level=info msg="StartContainer for \"7694941557a7e80aaa833cb87ae26dfa19413dd349de3cd589eac78a352c5411\""
Jan 13 20:54:54.666255 systemd[1]: Started cri-containerd-7694941557a7e80aaa833cb87ae26dfa19413dd349de3cd589eac78a352c5411.scope - libcontainer container 7694941557a7e80aaa833cb87ae26dfa19413dd349de3cd589eac78a352c5411.
Jan 13 20:54:54.700974 containerd[1897]: time="2025-01-13T20:54:54.700924539Z" level=info msg="StartContainer for \"7694941557a7e80aaa833cb87ae26dfa19413dd349de3cd589eac78a352c5411\" returns successfully"
Jan 13 20:54:55.023550 kubelet[2352]: E0113 20:54:55.023475 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:55.712841 kubelet[2352]: I0113 20:54:55.712788 2352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=35.349678862 podStartE2EDuration="35.712561733s" podCreationTimestamp="2025-01-13 20:54:20 +0000 UTC" firstStartedPulling="2025-01-13 20:54:54.175920387 +0000 UTC m=+78.559002918" lastFinishedPulling="2025-01-13 20:54:54.53880325 +0000 UTC m=+78.921885789" observedRunningTime="2025-01-13 20:54:55.711964199 +0000 UTC m=+80.095046753" watchObservedRunningTime="2025-01-13 20:54:55.712561733 +0000 UTC m=+80.095644285"
Jan 13 20:54:55.806970 systemd-networkd[1737]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 20:54:55.952388 kubelet[2352]: E0113 20:54:55.952336 2352 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:56.023728 kubelet[2352]: E0113 20:54:56.023680 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:57.024023 kubelet[2352]: E0113 20:54:57.023978 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:58.023522 ntpd[1867]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 13 20:54:58.023991 ntpd[1867]: 13 Jan 20:54:58 ntpd[1867]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 13 20:54:58.025366 kubelet[2352]: E0113 20:54:58.025312 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:54:59.025823 kubelet[2352]: E0113 20:54:59.025764 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:00.026635 kubelet[2352]: E0113 20:55:00.026572 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:01.027000 kubelet[2352]: E0113 20:55:01.026937 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:02.027401 kubelet[2352]: E0113 20:55:02.027352 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:03.028603 kubelet[2352]: E0113 20:55:03.028510 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:04.028997 kubelet[2352]: E0113 20:55:04.028936 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:05.029570 kubelet[2352]: E0113 20:55:05.029500 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:06.030861 kubelet[2352]: E0113 20:55:06.030809 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:07.032106 kubelet[2352]: E0113 20:55:07.032040 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:08.033369 kubelet[2352]: E0113 20:55:08.033304 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:09.034767 kubelet[2352]: E0113 20:55:09.034608 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:10.035303 kubelet[2352]: E0113 20:55:10.034961 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:11.036399 kubelet[2352]: E0113 20:55:11.036318 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:12.036795 kubelet[2352]: E0113 20:55:12.036738 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:13.037551 kubelet[2352]: E0113 20:55:13.037480 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:14.038440 kubelet[2352]: E0113 20:55:14.038388 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:15.039421 kubelet[2352]: E0113 20:55:15.039364 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:15.952060 kubelet[2352]: E0113 20:55:15.952003 2352 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:16.039725 kubelet[2352]: E0113 20:55:16.039646 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:17.040808 kubelet[2352]: E0113 20:55:17.040754 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:18.041227 kubelet[2352]: E0113 20:55:18.041172 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:19.041763 kubelet[2352]: E0113 20:55:19.041707 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:20.042462 kubelet[2352]: E0113 20:55:20.042408 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:21.043474 kubelet[2352]: E0113 20:55:21.043417 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:22.044296 kubelet[2352]: E0113 20:55:22.044249 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:23.044542 kubelet[2352]: E0113 20:55:23.044384 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:24.045376 kubelet[2352]: E0113 20:55:24.045322 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:25.045928 kubelet[2352]: E0113 20:55:25.045875 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:26.046085 kubelet[2352]: E0113 20:55:26.046021 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:27.046415 kubelet[2352]: E0113 20:55:27.046360 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:28.047360 kubelet[2352]: E0113 20:55:28.047305 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:28.495443 kubelet[2352]: E0113 20:55:28.495380 2352 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.29.104\": Get \"https://172.31.31.223:6443/api/v1/nodes/172.31.29.104?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:55:28.614801 kubelet[2352]: E0113 20:55:28.614723 2352 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.104?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:55:29.047923 kubelet[2352]: E0113 20:55:29.047871 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:30.048254 kubelet[2352]: E0113 20:55:30.048196 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:31.048921 kubelet[2352]: E0113 20:55:31.048865 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:55:32.049887 kubelet[2352]: E0113 20:55:32.049849 2352 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"