Feb 13 21:36:42.044873 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025 Feb 13 21:36:42.044929 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:36:42.044945 kernel: BIOS-provided physical RAM map: Feb 13 21:36:42.044962 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 21:36:42.044972 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 21:36:42.044983 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 21:36:42.044995 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Feb 13 21:36:42.045006 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Feb 13 21:36:42.045017 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 21:36:42.045028 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 21:36:42.045039 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 21:36:42.045050 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 21:36:42.045066 kernel: NX (Execute Disable) protection: active Feb 13 21:36:42.045078 kernel: APIC: Static calls initialized Feb 13 21:36:42.045091 kernel: SMBIOS 2.8 present. Feb 13 21:36:42.045103 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Feb 13 21:36:42.045115 kernel: Hypervisor detected: KVM Feb 13 21:36:42.045131 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 21:36:42.045143 kernel: kvm-clock: using sched offset of 4497805938 cycles Feb 13 21:36:42.045156 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 21:36:42.045168 kernel: tsc: Detected 2500.032 MHz processor Feb 13 21:36:42.045780 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 21:36:42.045796 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 21:36:42.045808 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Feb 13 21:36:42.045820 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 21:36:42.045832 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 21:36:42.045852 kernel: Using GB pages for direct mapping Feb 13 21:36:42.045864 kernel: ACPI: Early table checksum verification disabled Feb 13 21:36:42.045876 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Feb 13 21:36:42.045888 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 21:36:42.045901 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 21:36:42.045913 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 21:36:42.045925 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Feb 13 21:36:42.045937 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 21:36:42.045949 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Feb 13 21:36:42.045966 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 21:36:42.045978 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 21:36:42.045990 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Feb 13 21:36:42.046002 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Feb 13 21:36:42.046014 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Feb 13 21:36:42.046032 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Feb 13 21:36:42.046045 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Feb 13 21:36:42.046062 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Feb 13 21:36:42.046075 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Feb 13 21:36:42.046087 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 21:36:42.046100 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 21:36:42.046112 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Feb 13 21:36:42.046125 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Feb 13 21:36:42.046137 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Feb 13 21:36:42.046150 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Feb 13 21:36:42.046167 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Feb 13 21:36:42.046204 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Feb 13 21:36:42.046217 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Feb 13 21:36:42.047230 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Feb 13 21:36:42.047243 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Feb 13 21:36:42.047256 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Feb 13 21:36:42.047268 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Feb 13 21:36:42.047280 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Feb 13 21:36:42.047292 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Feb 13 21:36:42.047311 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Feb 13 21:36:42.047323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 21:36:42.047378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 13 21:36:42.047394 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Feb 13 21:36:42.047406 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Feb 13 21:36:42.047419 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Feb 13 21:36:42.047432 kernel: Zone ranges: Feb 13 21:36:42.047444 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 21:36:42.047456 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Feb 13 21:36:42.047469 kernel: Normal empty Feb 13 21:36:42.047487 kernel: Movable zone start for each node Feb 13 21:36:42.047500 kernel: Early memory node ranges Feb 13 21:36:42.047512 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 21:36:42.047524 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Feb 13 21:36:42.047537 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Feb 13 21:36:42.047549 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 21:36:42.047562 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 21:36:42.047574 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Feb 13 21:36:42.047587 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 21:36:42.047617 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 21:36:42.047630 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Feb 13 21:36:42.047643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 21:36:42.047655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 21:36:42.047668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 21:36:42.047680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 21:36:42.047692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 21:36:42.047705 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 21:36:42.047717 kernel: TSC deadline timer available Feb 13 21:36:42.047734 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Feb 13 21:36:42.047747 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 21:36:42.047760 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 21:36:42.047772 kernel: Booting paravirtualized kernel on KVM Feb 13 21:36:42.047785 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 21:36:42.047797 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 13 21:36:42.047810 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 21:36:42.047823 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 21:36:42.047835 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 21:36:42.047852 kernel: kvm-guest: PV spinlocks enabled Feb 13 21:36:42.047865 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 21:36:42.047879 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:36:42.047892 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 21:36:42.047904 kernel: random: crng init done Feb 13 21:36:42.047917 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 21:36:42.047929 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 21:36:42.047941 kernel: Fallback order for Node 0: 0 Feb 13 21:36:42.047959 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Feb 13 21:36:42.047971 kernel: Policy zone: DMA32 Feb 13 21:36:42.047984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 21:36:42.047996 kernel: software IO TLB: area num 16. Feb 13 21:36:42.048009 kernel: Memory: 1899484K/2096616K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 196872K reserved, 0K cma-reserved) Feb 13 21:36:42.048022 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 21:36:42.048034 kernel: Kernel/User page tables isolation: enabled Feb 13 21:36:42.048047 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 21:36:42.048059 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 21:36:42.048077 kernel: Dynamic Preempt: voluntary Feb 13 21:36:42.048090 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 21:36:42.048103 kernel: rcu: RCU event tracing is enabled. 
Feb 13 21:36:42.048116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 21:36:42.048129 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 21:36:42.048154 kernel: Rude variant of Tasks RCU enabled. Feb 13 21:36:42.048172 kernel: Tracing variant of Tasks RCU enabled. Feb 13 21:36:42.049216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 21:36:42.049232 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 21:36:42.049245 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Feb 13 21:36:42.049259 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 21:36:42.049272 kernel: Console: colour VGA+ 80x25 Feb 13 21:36:42.049293 kernel: printk: console [tty0] enabled Feb 13 21:36:42.049306 kernel: printk: console [ttyS0] enabled Feb 13 21:36:42.049319 kernel: ACPI: Core revision 20230628 Feb 13 21:36:42.049333 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 21:36:42.049356 kernel: x2apic enabled Feb 13 21:36:42.049377 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 21:36:42.049391 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Feb 13 21:36:42.049405 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Feb 13 21:36:42.049418 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 21:36:42.049431 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 21:36:42.049444 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 21:36:42.049457 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 21:36:42.049470 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 21:36:42.049483 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 21:36:42.049496 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 21:36:42.049514 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 13 21:36:42.049528 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 21:36:42.049541 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 21:36:42.049554 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 21:36:42.049566 kernel: MMIO Stale Data: Unknown: No mitigations Feb 13 21:36:42.049579 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 13 21:36:42.049592 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 21:36:42.049605 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 21:36:42.049619 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 21:36:42.049631 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 21:36:42.049650 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 21:36:42.049663 kernel: Freeing SMP alternatives memory: 32K Feb 13 21:36:42.049676 kernel: pid_max: default: 32768 minimum: 301 Feb 13 21:36:42.049689 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 21:36:42.049702 kernel: landlock: Up and running. Feb 13 21:36:42.049715 kernel: SELinux: Initializing. 
Feb 13 21:36:42.049728 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 21:36:42.049751 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 21:36:42.049765 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Feb 13 21:36:42.049778 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:36:42.049791 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:36:42.049810 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:36:42.049824 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Feb 13 21:36:42.049837 kernel: signal: max sigframe size: 1776 Feb 13 21:36:42.049850 kernel: rcu: Hierarchical SRCU implementation. Feb 13 21:36:42.049864 kernel: rcu: Max phase no-delay instances is 400. Feb 13 21:36:42.049878 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 21:36:42.049891 kernel: smp: Bringing up secondary CPUs ... Feb 13 21:36:42.049904 kernel: smpboot: x86: Booting SMP configuration: Feb 13 21:36:42.049917 kernel: .... node #0, CPUs: #1 Feb 13 21:36:42.049935 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 13 21:36:42.049948 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 21:36:42.049961 kernel: smpboot: Max logical packages: 16 Feb 13 21:36:42.049979 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Feb 13 21:36:42.049992 kernel: devtmpfs: initialized Feb 13 21:36:42.050005 kernel: x86/mm: Memory block size: 128MB Feb 13 21:36:42.050018 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 21:36:42.050031 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 21:36:42.050055 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 21:36:42.050075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 21:36:42.050088 kernel: audit: initializing netlink subsys (disabled) Feb 13 21:36:42.050102 kernel: audit: type=2000 audit(1739482600.250:1): state=initialized audit_enabled=0 res=1 Feb 13 21:36:42.050114 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 21:36:42.050128 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 21:36:42.050141 kernel: cpuidle: using governor menu Feb 13 21:36:42.050154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 21:36:42.050166 kernel: dca service started, version 1.12.1 Feb 13 21:36:42.050201 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 21:36:42.050220 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 21:36:42.050234 kernel: PCI: Using configuration type 1 for base access Feb 13 21:36:42.050247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 21:36:42.050261 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 21:36:42.050274 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 21:36:42.050287 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 21:36:42.050300 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 21:36:42.050313 kernel: ACPI: Added _OSI(Module Device) Feb 13 21:36:42.050326 kernel: ACPI: Added _OSI(Processor Device) Feb 13 21:36:42.050344 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 21:36:42.050368 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 21:36:42.050381 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 21:36:42.050395 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 21:36:42.050407 kernel: ACPI: Interpreter enabled Feb 13 21:36:42.050420 kernel: ACPI: PM: (supports S0 S5) Feb 13 21:36:42.050433 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 21:36:42.050446 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 21:36:42.050459 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 21:36:42.050478 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 21:36:42.050491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 21:36:42.050790 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 21:36:42.050975 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 21:36:42.051143 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 21:36:42.051163 kernel: PCI host bridge to bus 0000:00 Feb 13 21:36:42.053462 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 21:36:42.053641 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 21:36:42.053801 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 21:36:42.053958 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Feb 13 21:36:42.054116 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 21:36:42.054299 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Feb 13 21:36:42.054492 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 21:36:42.054699 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 21:36:42.054906 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Feb 13 21:36:42.055081 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Feb 13 21:36:42.057727 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Feb 13 21:36:42.057923 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Feb 13 21:36:42.058094 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 21:36:42.058488 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 21:36:42.060082 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Feb 13 21:36:42.060318 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 21:36:42.060513 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Feb 13 21:36:42.060711 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 21:36:42.060899 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Feb 13 21:36:42.061085 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 
21:36:42.062321 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Feb 13 21:36:42.062634 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 21:36:42.062808 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Feb 13 21:36:42.062990 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 21:36:42.063187 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Feb 13 21:36:42.063386 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 21:36:42.063568 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Feb 13 21:36:42.063749 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 21:36:42.063920 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Feb 13 21:36:42.066080 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 13 21:36:42.066676 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 21:36:42.066869 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Feb 13 21:36:42.067051 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Feb 13 21:36:42.067280 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Feb 13 21:36:42.067493 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 13 21:36:42.067675 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 21:36:42.067854 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Feb 13 21:36:42.068030 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Feb 13 21:36:42.069706 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 21:36:42.069892 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 21:36:42.070091 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 21:36:42.070319 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Feb 13 21:36:42.070504 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Feb 13 21:36:42.070684 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 21:36:42.070851 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 21:36:42.071048 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Feb 13 21:36:42.072275 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Feb 13 21:36:42.072471 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 13 21:36:42.072644 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 13 21:36:42.072817 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 13 21:36:42.073005 kernel: pci_bus 0000:02: extended config space not accessible Feb 13 21:36:42.073223 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Feb 13 21:36:42.073432 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Feb 13 21:36:42.073607 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 13 21:36:42.073781 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 13 21:36:42.073980 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 21:36:42.074168 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Feb 13 21:36:42.076430 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 13 21:36:42.076609 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 13 21:36:42.076784 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 21:36:42.076983 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 
21:36:42.077160 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Feb 13 21:36:42.079400 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 13 21:36:42.079580 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 13 21:36:42.079753 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 21:36:42.079928 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 13 21:36:42.080098 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 13 21:36:42.080333 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 21:36:42.080526 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 13 21:36:42.080694 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 13 21:36:42.080868 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 21:36:42.081045 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 13 21:36:42.081228 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 13 21:36:42.081415 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 21:36:42.081590 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 13 21:36:42.081766 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 13 21:36:42.081939 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 21:36:42.082114 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 13 21:36:42.084360 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 13 21:36:42.084542 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 21:36:42.084564 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 21:36:42.084578 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 21:36:42.084592 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 21:36:42.084606 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 21:36:42.084627 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 21:36:42.084641 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 21:36:42.084654 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 21:36:42.084668 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 21:36:42.084681 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 21:36:42.084694 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 21:36:42.084707 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 21:36:42.084720 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 21:36:42.084734 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 21:36:42.084752 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 21:36:42.084765 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 21:36:42.084779 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 21:36:42.084792 kernel: iommu: Default domain type: Translated Feb 13 21:36:42.084805 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 21:36:42.084818 kernel: PCI: Using ACPI for IRQ routing Feb 13 21:36:42.084831 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 21:36:42.084845 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 21:36:42.084858 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Feb 13 21:36:42.085029 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Feb 13 21:36:42.085220 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 21:36:42.085404 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 21:36:42.085425 kernel: vgaarb: loaded Feb 13 21:36:42.085439 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 21:36:42.085453 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 21:36:42.085466 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 21:36:42.085479 kernel: pnp: PnP ACPI init Feb 13 21:36:42.085660 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 21:36:42.085682 kernel: pnp: PnP ACPI: found 5 devices Feb 13 21:36:42.085696 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 21:36:42.085709 kernel: NET: Registered PF_INET protocol family Feb 13 21:36:42.085723 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 21:36:42.085736 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 21:36:42.085749 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 21:36:42.085763 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 21:36:42.085784 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 21:36:42.085798 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 21:36:42.085811 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 21:36:42.085824 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 21:36:42.085838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 21:36:42.085851 kernel: NET: Registered PF_XDP protocol family Feb 13 21:36:42.086015 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Feb 13 21:36:42.088219 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 21:36:42.088431 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 21:36:42.088606 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 21:36:42.088777 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 21:36:42.088945 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 21:36:42.089113 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 21:36:42.089313 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 21:36:42.089503 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 21:36:42.089670 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 21:36:42.089835 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 21:36:42.090000 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 21:36:42.090164 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 21:36:42.096745 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 21:36:42.096922 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 21:36:42.097093 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 21:36:42.097356 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 13 21:36:42.097543 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 13 
21:36:42.097713 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 13 21:36:42.097889 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 21:36:42.098056 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 13 21:36:42.098246 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 13 21:36:42.098429 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 13 21:36:42.098598 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 21:36:42.098778 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 13 21:36:42.098949 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 21:36:42.099120 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 13 21:36:42.099321 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 21:36:42.099520 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 13 21:36:42.099710 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 21:36:42.099898 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 13 21:36:42.100069 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 21:36:42.100294 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 13 21:36:42.100475 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 21:36:42.100641 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 13 21:36:42.100811 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 21:36:42.100980 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 13 21:36:42.101160 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 21:36:42.105416 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 13 21:36:42.105603 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 21:36:42.105774 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 13 21:36:42.105942 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 21:36:42.106109 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 13 21:36:42.106304 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 21:36:42.106495 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 13 21:36:42.106663 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 21:36:42.106831 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 13 21:36:42.106997 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 21:36:42.107164 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 13 21:36:42.107369 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 21:36:42.107534 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 21:36:42.107690 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 21:36:42.107845 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 21:36:42.108008 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Feb 13 21:36:42.108163 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 21:36:42.108342 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Feb 13 21:36:42.108529 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 21:36:42.108693 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Feb 13 21:36:42.108853 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Feb 13 21:36:42.109027 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Feb 13 21:36:42.109252 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Feb 13 21:36:42.109429 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Feb 13 21:36:42.109587 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 21:36:42.109759 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Feb 13 21:36:42.109918 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Feb 13 21:36:42.110074 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 21:36:42.115158 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 21:36:42.115368 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Feb 13 21:36:42.115535 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 21:36:42.115719 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Feb 13 21:36:42.115881 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Feb 13 21:36:42.116041 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 21:36:42.116227 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Feb 13 21:36:42.116416 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Feb 13 21:36:42.116575 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 21:36:42.116747 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Feb 13 21:36:42.116906 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Feb 13 21:36:42.117066 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 21:36:42.117263 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Feb 13 21:36:42.117441 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Feb 13 21:36:42.117610 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 21:36:42.117633 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 21:36:42.117648 kernel: PCI: CLS 0 bytes, default 64 Feb 13 21:36:42.117669 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 21:36:42.117684 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 13 21:36:42.117698 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 21:36:42.117712 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Feb 13 21:36:42.117726 kernel: Initialise system trusted keyrings Feb 13 21:36:42.117746 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 21:36:42.117760 kernel: Key type asymmetric registered Feb 13 21:36:42.117774 kernel: Asymmetric key parser 'x509' registered Feb 13 21:36:42.117787 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 21:36:42.117801 kernel: io scheduler mq-deadline registered Feb 13 21:36:42.117815 kernel: io scheduler kyber registered Feb 13 21:36:42.117829 kernel: io scheduler bfq registered Feb 13 21:36:42.117998 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 13 21:36:42.118171 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 13 21:36:42.118381 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.118554 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 13 21:36:42.118725 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Feb 13 21:36:42.118895 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.119066 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 13 21:36:42.121152 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 13 21:36:42.121381 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.121565 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 13 21:36:42.121745 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 13 21:36:42.121923 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.122104 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 13 21:36:42.122298 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 13 21:36:42.122496 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.122676 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 13 21:36:42.122845 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 13 21:36:42.123013 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.123223 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 13 21:36:42.123407 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 13 21:36:42.123585 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.123765 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 13 21:36:42.123944 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 13 21:36:42.124112 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:36:42.124134 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 21:36:42.124149 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 21:36:42.124171 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 21:36:42.124246 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 21:36:42.124261 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 21:36:42.124275 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 21:36:42.124288 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 21:36:42.124303 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 21:36:42.124490 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 21:36:42.124513 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 21:36:42.124678 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 21:36:42.124865 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T21:36:41 UTC (1739482601) Feb 13 21:36:42.125041 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 21:36:42.125063 kernel: intel_pstate: CPU model not supported Feb 13 21:36:42.125077 kernel: NET: Registered PF_INET6 protocol family Feb 13 21:36:42.125091 kernel: Segment Routing with IPv6 Feb 13 21:36:42.125105 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 
21:36:42.125119 kernel: NET: Registered PF_PACKET protocol family Feb 13 21:36:42.125133 kernel: Key type dns_resolver registered Feb 13 21:36:42.125154 kernel: IPI shorthand broadcast: enabled Feb 13 21:36:42.125169 kernel: sched_clock: Marking stable (1128004002, 232475770)->(1588223477, -227743705) Feb 13 21:36:42.125205 kernel: registered taskstats version 1 Feb 13 21:36:42.125219 kernel: Loading compiled-in X.509 certificates Feb 13 21:36:42.125239 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 21:36:42.125254 kernel: Key type .fscrypt registered Feb 13 21:36:42.125268 kernel: Key type fscrypt-provisioning registered Feb 13 21:36:42.125281 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 21:36:42.125295 kernel: ima: Allocated hash algorithm: sha1 Feb 13 21:36:42.125315 kernel: ima: No architecture policies found Feb 13 21:36:42.125329 kernel: clk: Disabling unused clocks Feb 13 21:36:42.125354 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 21:36:42.125369 kernel: Write protecting the kernel read-only data: 38912k Feb 13 21:36:42.125383 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 21:36:42.125397 kernel: Run /init as init process Feb 13 21:36:42.125410 kernel: with arguments: Feb 13 21:36:42.125424 kernel: /init Feb 13 21:36:42.125438 kernel: with environment: Feb 13 21:36:42.125457 kernel: HOME=/ Feb 13 21:36:42.125471 kernel: TERM=linux Feb 13 21:36:42.125485 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 21:36:42.125508 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 21:36:42.125528 systemd[1]: Detected virtualization kvm. Feb 13 21:36:42.125543 systemd[1]: Detected architecture x86-64. Feb 13 21:36:42.125558 systemd[1]: Running in initrd. Feb 13 21:36:42.125573 systemd[1]: No hostname configured, using default hostname. Feb 13 21:36:42.125595 systemd[1]: Hostname set to . Feb 13 21:36:42.125610 systemd[1]: Initializing machine ID from VM UUID. Feb 13 21:36:42.125625 systemd[1]: Queued start job for default target initrd.target. Feb 13 21:36:42.125640 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:36:42.125656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:36:42.125672 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 21:36:42.125687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 21:36:42.125703 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 21:36:42.125724 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 21:36:42.125741 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 21:36:42.125756 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 21:36:42.125771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 21:36:42.125787 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:36:42.125802 systemd[1]: Reached target paths.target - Path Units. Feb 13 21:36:42.125817 systemd[1]: Reached target slices.target - Slice Units. Feb 13 21:36:42.125837 systemd[1]: Reached target swap.target - Swaps. Feb 13 21:36:42.125852 systemd[1]: Reached target timers.target - Timer Units. Feb 13 21:36:42.125868 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:36:42.125883 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:36:42.125898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 21:36:42.125913 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 21:36:42.125928 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:36:42.125943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 21:36:42.125963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:36:42.125979 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 21:36:42.125994 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 21:36:42.126009 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 21:36:42.126024 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 21:36:42.126039 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 21:36:42.126054 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 21:36:42.126069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 21:36:42.126084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:36:42.126105 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 21:36:42.126163 systemd-journald[201]: Collecting audit messages is disabled. Feb 13 21:36:42.126250 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:36:42.126266 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 21:36:42.126290 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 21:36:42.126305 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 21:36:42.126328 systemd-journald[201]: Journal started Feb 13 21:36:42.126372 systemd-journald[201]: Runtime Journal (/run/log/journal/bd8c1a52dcff419783d97899afbfd4e7) is 4.7M, max 37.9M, 33.2M free. Feb 13 21:36:42.062545 systemd-modules-load[202]: Inserted module 'overlay' Feb 13 21:36:42.137797 kernel: Bridge firewalling registered Feb 13 21:36:42.137825 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 21:36:42.130367 systemd-modules-load[202]: Inserted module 'br_netfilter' Feb 13 21:36:42.143808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 21:36:42.144873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:36:42.154508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:36:42.164421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 21:36:42.171553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Feb 13 21:36:42.176311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 21:36:42.185355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 21:36:42.188029 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:36:42.201277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:36:42.203458 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:36:42.212456 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 21:36:42.216391 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 21:36:42.218249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:36:42.231598 dracut-cmdline[235]: dracut-dracut-053 Feb 13 21:36:42.238971 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:36:42.268611 systemd-resolved[236]: Positive Trust Anchors: Feb 13 21:36:42.268642 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 21:36:42.268692 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 21:36:42.272971 systemd-resolved[236]: Defaulting to hostname 'linux'. Feb 13 21:36:42.275126 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 21:36:42.277403 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:36:42.355240 kernel: SCSI subsystem initialized Feb 13 21:36:42.367280 kernel: Loading iSCSI transport class v2.0-870. Feb 13 21:36:42.380191 kernel: iscsi: registered transport (tcp) Feb 13 21:36:42.407763 kernel: iscsi: registered transport (qla4xxx) Feb 13 21:36:42.407865 kernel: QLogic iSCSI HBA Driver Feb 13 21:36:42.463842 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 21:36:42.470407 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 21:36:42.503557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 21:36:42.503669 kernel: device-mapper: uevent: version 1.0.3 Feb 13 21:36:42.505255 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 21:36:42.556245 kernel: raid6: sse2x4 gen() 7521 MB/s Feb 13 21:36:42.574230 kernel: raid6: sse2x2 gen() 5184 MB/s Feb 13 21:36:42.592989 kernel: raid6: sse2x1 gen() 5138 MB/s Feb 13 21:36:42.593121 kernel: raid6: using algorithm sse2x4 gen() 7521 MB/s Feb 13 21:36:42.611986 kernel: raid6: .... xor() 4858 MB/s, rmw enabled Feb 13 21:36:42.612155 kernel: raid6: using ssse3x2 recovery algorithm Feb 13 21:36:42.643282 kernel: xor: automatically using best checksumming function avx Feb 13 21:36:42.825229 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 21:36:42.843639 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:36:42.850466 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:36:42.879957 systemd-udevd[420]: Using default interface naming scheme 'v255'. Feb 13 21:36:42.887501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:36:42.895439 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 21:36:42.920599 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Feb 13 21:36:42.964372 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:36:42.971444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 21:36:43.094532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:36:43.102427 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 21:36:43.140674 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 21:36:43.144439 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:36:43.145256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:36:43.145966 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 21:36:43.157408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 21:36:43.187688 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:36:43.228276 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 13 21:36:43.317014 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 21:36:43.317281 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 21:36:43.317328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 21:36:43.317360 kernel: GPT:17805311 != 125829119 Feb 13 21:36:43.317378 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 21:36:43.317396 kernel: GPT:17805311 != 125829119 Feb 13 21:36:43.317413 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 21:36:43.317431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 21:36:43.317449 kernel: AVX version of gcm_enc/dec engaged. Feb 13 21:36:43.317467 kernel: AES CTR mode by8 optimization enabled Feb 13 21:36:43.317485 kernel: libata version 3.00 loaded. 
Feb 13 21:36:43.326634 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 21:36:43.451507 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 21:36:43.451544 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 21:36:43.451782 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 21:36:43.451993 kernel: ACPI: bus type USB registered Feb 13 21:36:43.452014 kernel: usbcore: registered new interface driver usbfs Feb 13 21:36:43.452031 kernel: usbcore: registered new interface driver hub Feb 13 21:36:43.452049 kernel: usbcore: registered new device driver usb Feb 13 21:36:43.452066 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 21:36:43.453976 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 13 21:36:43.456250 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 21:36:43.456533 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 21:36:43.456751 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) Feb 13 21:36:43.456775 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 13 21:36:43.456984 kernel: scsi host0: ahci Feb 13 21:36:43.457653 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 13 21:36:43.457891 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (478) Feb 13 21:36:43.457912 kernel: hub 1-0:1.0: USB hub found Feb 13 21:36:43.458153 kernel: scsi host1: ahci Feb 13 21:36:43.459443 kernel: hub 1-0:1.0: 4 ports detected Feb 13 21:36:43.459679 kernel: scsi host2: ahci Feb 13 21:36:43.459884 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 21:36:43.461261 kernel: hub 2-0:1.0: USB hub found Feb 13 21:36:43.461548 kernel: hub 2-0:1.0: 4 ports detected Feb 13 21:36:43.461782 kernel: scsi host3: ahci Feb 13 21:36:43.462006 kernel: scsi host4: ahci Feb 13 21:36:43.463335 kernel: scsi host5: ahci Feb 13 21:36:43.463557 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Feb 13 21:36:43.463581 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Feb 13 21:36:43.463608 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Feb 13 21:36:43.463628 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Feb 13 21:36:43.463646 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Feb 13 21:36:43.463665 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Feb 13 21:36:43.377415 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 21:36:43.382977 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 21:36:43.383342 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:36:43.385369 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:36:43.411218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:36:43.411450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:36:43.413651 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:36:43.423567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 21:36:43.446550 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 21:36:43.468877 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 21:36:43.472337 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 21:36:43.481514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 21:36:43.542822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:36:43.548464 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 21:36:43.554485 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:36:43.565135 disk-uuid[562]: Primary Header is updated. Feb 13 21:36:43.565135 disk-uuid[562]: Secondary Entries is updated. Feb 13 21:36:43.565135 disk-uuid[562]: Secondary Header is updated. Feb 13 21:36:43.571227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 21:36:43.579249 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 21:36:43.584600 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:36:43.647204 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 21:36:43.759213 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 21:36:43.761633 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 21:36:43.761678 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 21:36:43.771381 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 21:36:43.771421 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 21:36:43.772213 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 21:36:43.796213 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 21:36:43.802463 kernel: usbcore: registered new interface driver usbhid Feb 13 21:36:43.802516 kernel: usbhid: USB HID core driver Feb 13 21:36:43.809238 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 13 21:36:43.813358 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 21:36:44.582373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 21:36:44.584194 disk-uuid[563]: The operation has completed successfully. Feb 13 21:36:44.635359 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 21:36:44.635545 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 21:36:44.660467 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 21:36:44.667307 sh[582]: Success Feb 13 21:36:44.687505 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 13 21:36:44.755898 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 21:36:44.765314 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 21:36:44.768631 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 21:36:44.804376 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 21:36:44.804474 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:36:44.804496 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 21:36:44.806707 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 21:36:44.809864 kernel: BTRFS info (device dm-0): using free space tree Feb 13 21:36:44.819275 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 21:36:44.820799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 21:36:44.827467 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 21:36:44.830567 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 21:36:44.850609 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:36:44.850685 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:36:44.850707 kernel: BTRFS info (device vda6): using free space tree Feb 13 21:36:44.856205 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 21:36:44.867087 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 21:36:44.871199 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:36:44.877171 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 21:36:44.885428 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 21:36:45.012421 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:36:45.018845 ignition[687]: Ignition 2.20.0 Feb 13 21:36:45.018872 ignition[687]: Stage: fetch-offline Feb 13 21:36:45.018963 ignition[687]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:36:45.018982 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 21:36:45.020651 ignition[687]: parsed url from cmdline: "" Feb 13 21:36:45.020670 ignition[687]: no config URL provided Feb 13 21:36:45.020682 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 21:36:45.020701 ignition[687]: no config at "/usr/lib/ignition/user.ign" Feb 13 21:36:45.020718 ignition[687]: failed to fetch config: resource requires networking Feb 13 21:36:45.021004 ignition[687]: Ignition finished successfully Feb 13 21:36:45.026444 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 21:36:45.027536 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:36:45.067273 systemd-networkd[768]: lo: Link UP Feb 13 21:36:45.067287 systemd-networkd[768]: lo: Gained carrier Feb 13 21:36:45.069598 systemd-networkd[768]: Enumeration completed Feb 13 21:36:45.069791 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 21:36:45.070164 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 21:36:45.070171 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
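The fetch-offline messages above spell out Ignition's local lookup order: base.d snippets, a platform directory, then the baked-in user.ign, before it concludes that the provider config has to come over the network. A rough illustrative sketch of that order only; Ignition itself is written in Go, and the paths are the ones named in the log:

import os

BASE_DIR = "/usr/lib/ignition/base.d"
PLATFORM_DIR = "/usr/lib/ignition/base.platform.d/openstack"
USER_CONFIG = "/usr/lib/ignition/user.ign"

def find_offline_config():
    """Mimic the lookup order reported by the fetch-offline stage above."""
    for d in (BASE_DIR, PLATFORM_DIR):
        if os.path.isdir(d):
            print(f"found config dir at {d!r}")
    if os.path.isfile(USER_CONFIG):
        with open(USER_CONFIG) as f:
            return f.read()
    # Nothing baked into the image, so the provider config must come from the
    # network -- which is why fetch-offline gives up with "resource requires
    # networking" and the fetch stage runs after networkd is up.
    return None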
Feb 13 21:36:45.071525 systemd-networkd[768]: eth0: Link UP Feb 13 21:36:45.071531 systemd-networkd[768]: eth0: Gained carrier Feb 13 21:36:45.071543 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 21:36:45.073614 systemd[1]: Reached target network.target - Network. Feb 13 21:36:45.090467 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 21:36:45.100288 systemd-networkd[768]: eth0: DHCPv4 address 10.230.24.66/30, gateway 10.230.24.65 acquired from 10.230.24.65 Feb 13 21:36:45.108902 ignition[771]: Ignition 2.20.0 Feb 13 21:36:45.108922 ignition[771]: Stage: fetch Feb 13 21:36:45.109239 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:36:45.109275 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 21:36:45.109418 ignition[771]: parsed url from cmdline: "" Feb 13 21:36:45.109426 ignition[771]: no config URL provided Feb 13 21:36:45.109436 ignition[771]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 21:36:45.109454 ignition[771]: no config at "/usr/lib/ignition/user.ign" Feb 13 21:36:45.109646 ignition[771]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 21:36:45.109802 ignition[771]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 21:36:45.109850 ignition[771]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 13 21:36:45.126896 ignition[771]: GET result: OK Feb 13 21:36:45.127043 ignition[771]: parsing config with SHA512: 6d1f9a0a417edefae7bdc4a858c1bed732ccaed67ec9303d80f75ad124bd308bcde37cf14a91513f95b4f687c1d3ab7b57a67c1a7cca2b862ceb4bf190f10755 Feb 13 21:36:45.131457 unknown[771]: fetched base config from "system" Feb 13 21:36:45.131474 unknown[771]: fetched base config from "system" Feb 13 21:36:45.131733 ignition[771]: fetch: fetch complete Feb 13 21:36:45.131483 unknown[771]: fetched user config from "openstack" Feb 13 21:36:45.131741 ignition[771]: fetch: fetch passed Feb 13 21:36:45.131808 ignition[771]: Ignition finished successfully Feb 13 21:36:45.136303 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 21:36:45.149427 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 21:36:45.171027 ignition[778]: Ignition 2.20.0 Feb 13 21:36:45.171048 ignition[778]: Stage: kargs Feb 13 21:36:45.171332 ignition[778]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:36:45.173699 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 21:36:45.171353 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 21:36:45.172235 ignition[778]: kargs: kargs passed Feb 13 21:36:45.172324 ignition[778]: Ignition finished successfully Feb 13 21:36:45.191535 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 21:36:45.206955 ignition[784]: Ignition 2.20.0 Feb 13 21:36:45.206980 ignition[784]: Stage: disks Feb 13 21:36:45.208040 ignition[784]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:36:45.208075 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 21:36:45.209033 ignition[784]: disks: disks passed Feb 13 21:36:45.210264 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 21:36:45.209108 ignition[784]: Ignition finished successfully Feb 13 21:36:45.211675 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
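The fetch stage above pulls the user_data blob from the OpenStack metadata service and logs the SHA512 of what it parsed. The same request and digest can be reproduced with the standard library; a quick sketch with the endpoint URL as logged, run from inside the instance:

import hashlib
import urllib.request

USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data(timeout: float = 5.0) -> tuple[bytes, str]:
    """Fetch the OpenStack user_data blob and report its SHA512,
    matching the 'parsing config with SHA512: ...' log line above."""
    with urllib.request.urlopen(USER_DATA_URL, timeout=timeout) as resp:
        body = resp.read()
    return body, hashlib.sha512(body).hexdigest()

if __name__ == "__main__":
    data, digest = fetch_user_data()
    print(f"fetched {len(data)} bytes, sha512 {digest}")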
Feb 13 21:36:45.213122 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 21:36:45.214627 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 21:36:45.216158 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 21:36:45.217692 systemd[1]: Reached target basic.target - Basic System. Feb 13 21:36:45.224525 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 21:36:45.245169 systemd-fsck[793]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 21:36:45.248607 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 21:36:45.257395 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 21:36:45.368204 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 21:36:45.369558 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 21:36:45.370868 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 21:36:45.384442 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 21:36:45.387946 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 21:36:45.389481 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 21:36:45.392354 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 21:36:45.394489 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 21:36:45.394540 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:36:45.401205 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801) Feb 13 21:36:45.408197 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:36:45.408865 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 21:36:45.413727 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:36:45.413756 kernel: BTRFS info (device vda6): using free space tree Feb 13 21:36:45.424448 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 21:36:45.427601 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 21:36:45.430307 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 21:36:45.497320 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 21:36:45.507340 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Feb 13 21:36:45.514018 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 21:36:45.521498 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 21:36:45.624174 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 21:36:45.630373 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 21:36:45.637447 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 21:36:45.649225 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:36:45.668829 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
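Mounting /sysroot via /dev/disk/by-label/ROOT works because udev reads the filesystem label out of the ext4 superblock. A minimal sketch of that read, using the ext2/3/4 on-disk superblock offsets; /dev/vda9 is the device the log shows being mounted, and reading it requires root:

import struct
import uuid

SUPERBLOCK_OFFSET = 1024      # ext2/3/4 superblock starts 1 KiB into the device

def ext4_label_and_uuid(dev: str) -> tuple[str, str]:
    """Read the label and UUID that back /dev/disk/by-label and /dev/disk/by-uuid."""
    with open(dev, "rb") as f:
        f.seek(SUPERBLOCK_OFFSET)
        sb = f.read(264)
    magic, = struct.unpack_from("<H", sb, 56)     # s_magic
    if magic != 0xEF53:
        raise ValueError("not an ext2/3/4 superblock")
    fs_uuid = uuid.UUID(bytes=sb[104:120])        # s_uuid
    label = sb[120:136].split(b"\0", 1)[0].decode("ascii", "replace")  # s_volume_name
    return label, str(fs_uuid)

if __name__ == "__main__":
    print(ext4_label_and_uuid("/dev/vda9"))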
Feb 13 21:36:45.677593 ignition[920]: INFO : Ignition 2.20.0 Feb 13 21:36:45.677593 ignition[920]: INFO : Stage: mount Feb 13 21:36:45.679421 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:36:45.679421 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 21:36:45.679421 ignition[920]: INFO : mount: mount passed Feb 13 21:36:45.679421 ignition[920]: INFO : Ignition finished successfully Feb 13 21:36:45.680021 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 21:36:45.801698 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 21:36:46.504738 systemd-networkd[768]: eth0: Gained IPv6LL Feb 13 21:36:48.011777 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8610:24:19ff:fee6:1842/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8610:24:19ff:fee6:1842/64 assigned by NDisc. Feb 13 21:36:48.011790 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 21:36:52.572665 coreos-metadata[803]: Feb 13 21:36:52.572 WARN failed to locate config-drive, using the metadata service API instead Feb 13 21:36:52.595759 coreos-metadata[803]: Feb 13 21:36:52.595 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 21:36:52.609360 coreos-metadata[803]: Feb 13 21:36:52.609 INFO Fetch successful Feb 13 21:36:52.610349 coreos-metadata[803]: Feb 13 21:36:52.609 INFO wrote hostname srv-7n28k.gb1.brightbox.com to /sysroot/etc/hostname Feb 13 21:36:52.612113 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 21:36:52.612319 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 21:36:52.619302 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 21:36:52.645441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 21:36:52.676240 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (935) Feb 13 21:36:52.681656 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:36:52.681731 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:36:52.681752 kernel: BTRFS info (device vda6): using free space tree Feb 13 21:36:52.688220 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 21:36:52.690903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
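The hostname agent above fails to locate a config drive, falls back to the EC2-style metadata endpoint, and writes the result to /sysroot/etc/hostname. A hedged sketch of that flow, with the URL and paths as logged (the config-drive branch is deliberately left unimplemented here):

import os
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

def write_hostname(root: str = "/sysroot") -> str:
    """Prefer a config drive, fall back to the metadata service, write /etc/hostname."""
    if os.path.exists("/dev/disk/by-label/config-2"):
        raise NotImplementedError("config drive parsing not sketched here")
    # Same fallback the 'failed to locate config-drive' warning describes.
    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(os.path.join(root, "etc/hostname"), "w") as f:
        f.write(hostname + "\n")
    return hostname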
Feb 13 21:36:52.718813 ignition[953]: INFO : Ignition 2.20.0 Feb 13 21:36:52.718813 ignition[953]: INFO : Stage: files Feb 13 21:36:52.720673 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:36:52.720673 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 21:36:52.720673 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Feb 13 21:36:52.723469 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 21:36:52.723469 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 21:36:52.725458 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 21:36:52.725458 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 21:36:52.727331 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 21:36:52.725847 unknown[953]: wrote ssh authorized keys file for user: core Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:36:52.729339 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 21:36:53.082421 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 21:36:54.387249 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 21:36:54.389673 ignition[953]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 21:36:54.389673 ignition[953]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 21:36:54.389673 ignition[953]: INFO : files: files passed Feb 13 21:36:54.389673 ignition[953]: INFO : Ignition finished successfully Feb 13 21:36:54.390629 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 21:36:54.404486 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
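Ops (5) and (6) above drop a Kubernetes sysext image under /opt/extensions and symlink it into /etc/extensions so systemd-sysext can merge it after switch-root. A sketch that reproduces just those two effects, with the URL and paths copied from the log; this is not Ignition's actual implementation:

import os
import urllib.request

RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.31.0-x86-64.raw")
TARGET = "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
LINK = "/etc/extensions/kubernetes.raw"

def install_kubernetes_sysext(root: str = "/sysroot") -> None:
    """Fetch the sysext image (op 6) and link it for systemd-sysext (op 5)."""
    dest = root + TARGET
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(RAW_URL, dest)
    link = root + LINK
    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.lexists(link):
        os.symlink(TARGET, link)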
Feb 13 21:36:54.408886 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 21:36:54.410587 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 21:36:54.410784 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 21:36:54.433323 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:36:54.433323 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:36:54.436857 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:36:54.438371 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:36:54.441133 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 21:36:54.447376 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 21:36:54.478638 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 21:36:54.478843 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 21:36:54.480971 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 21:36:54.482197 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 21:36:54.483889 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 21:36:54.490379 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 21:36:54.508077 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:36:54.515387 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 21:36:54.529256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:36:54.531125 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:36:54.533121 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 21:36:54.533892 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 21:36:54.534090 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:36:54.536169 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 21:36:54.537201 systemd[1]: Stopped target basic.target - Basic System. Feb 13 21:36:54.538764 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 21:36:54.540265 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:36:54.541715 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 21:36:54.543507 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 21:36:54.545105 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:36:54.546774 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 21:36:54.548168 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 21:36:54.549675 systemd[1]: Stopped target swap.target - Swaps. Feb 13 21:36:54.550966 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 21:36:54.551139 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:36:54.552747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 21:36:54.553684 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:36:54.555109 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 21:36:54.555296 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:36:54.556752 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 21:36:54.556922 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 21:36:54.559075 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 21:36:54.559269 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:36:54.560886 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 21:36:54.561074 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 21:36:54.571890 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 21:36:54.575499 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 21:36:54.576668 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 21:36:54.576914 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:36:54.579489 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 21:36:54.579804 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:36:54.591682 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 21:36:54.593442 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 21:36:54.602448 ignition[1005]: INFO : Ignition 2.20.0 Feb 13 21:36:54.602448 ignition[1005]: INFO : Stage: umount Feb 13 21:36:54.602448 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:36:54.602448 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 21:36:54.606982 ignition[1005]: INFO : umount: umount passed Feb 13 21:36:54.606982 ignition[1005]: INFO : Ignition finished successfully Feb 13 21:36:54.608418 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 21:36:54.608577 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 21:36:54.613093 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 21:36:54.614998 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 21:36:54.615107 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 21:36:54.616661 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 21:36:54.616746 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 21:36:54.618115 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 21:36:54.618232 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 21:36:54.619575 systemd[1]: Stopped target network.target - Network. Feb 13 21:36:54.620994 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 21:36:54.621066 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:36:54.622538 systemd[1]: Stopped target paths.target - Path Units. Feb 13 21:36:54.623807 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 21:36:54.628295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:36:54.629097 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 21:36:54.630867 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 21:36:54.638167 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 21:36:54.638263 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:36:54.639577 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 21:36:54.639647 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:36:54.640841 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 21:36:54.640907 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 21:36:54.642291 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 21:36:54.642365 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 21:36:54.643933 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 21:36:54.645546 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 21:36:54.650366 systemd-networkd[768]: eth0: DHCPv6 lease lost Feb 13 21:36:54.652687 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 21:36:54.652846 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 21:36:54.655171 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 21:36:54.655499 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:36:54.662307 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 21:36:54.662995 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 21:36:54.663072 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:36:54.664618 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:36:54.670861 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 21:36:54.671047 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 21:36:54.674593 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 21:36:54.674841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:36:54.682977 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 21:36:54.683115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 21:36:54.686456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 21:36:54.686530 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:36:54.688062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 21:36:54.688128 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:36:54.690273 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 21:36:54.690343 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 21:36:54.691767 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 21:36:54.691851 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:36:54.702418 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 21:36:54.703579 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 21:36:54.703668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:36:54.704435 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Feb 13 21:36:54.704505 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 21:36:54.707390 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 21:36:54.707460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:36:54.708522 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 21:36:54.708592 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 21:36:54.712065 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 21:36:54.712135 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:36:54.715342 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 21:36:54.715414 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:36:54.716520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:36:54.716591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:36:54.718874 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 21:36:54.719070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 21:36:54.720996 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 21:36:54.721148 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 21:36:54.722432 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 21:36:54.722575 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 21:36:54.725789 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 21:36:54.727035 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 21:36:54.727117 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 21:36:54.737396 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 21:36:54.748117 systemd[1]: Switching root. Feb 13 21:36:54.786833 systemd-journald[201]: Journal stopped Feb 13 21:36:56.289000 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Feb 13 21:36:56.289129 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 21:36:56.289172 kernel: SELinux: policy capability open_perms=1 Feb 13 21:36:56.289220 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 21:36:56.289248 kernel: SELinux: policy capability always_check_network=0 Feb 13 21:36:56.289269 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 21:36:56.289289 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 21:36:56.289308 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 21:36:56.289341 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 21:36:56.289363 kernel: audit: type=1403 audit(1739482615.017:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 21:36:56.289391 systemd[1]: Successfully loaded SELinux policy in 52.565ms. Feb 13 21:36:56.289459 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.597ms. 
Feb 13 21:36:56.289485 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 21:36:56.289507 systemd[1]: Detected virtualization kvm. Feb 13 21:36:56.289542 systemd[1]: Detected architecture x86-64. Feb 13 21:36:56.289565 systemd[1]: Detected first boot. Feb 13 21:36:56.289599 systemd[1]: Hostname set to . Feb 13 21:36:56.289622 systemd[1]: Initializing machine ID from VM UUID. Feb 13 21:36:56.289653 zram_generator::config[1047]: No configuration found. Feb 13 21:36:56.289678 systemd[1]: Populated /etc with preset unit settings. Feb 13 21:36:56.289698 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 21:36:56.289720 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 21:36:56.289753 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 21:36:56.289783 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 21:36:56.289822 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 21:36:56.289843 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 21:36:56.289877 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 21:36:56.289913 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 21:36:56.289935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 21:36:56.289956 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 21:36:56.289991 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 21:36:56.290015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:36:56.290036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:36:56.290057 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 21:36:56.290078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 21:36:56.290099 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 21:36:56.290121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 21:36:56.290142 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 21:36:56.290163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:36:56.290215 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 21:36:56.290248 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 21:36:56.290282 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 21:36:56.290302 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 21:36:56.290330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:36:56.290359 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 21:36:56.290393 systemd[1]: Reached target slices.target - Slice Units. 
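"Initializing machine ID from VM UUID" means the first-boot machine ID is derived from the UUID the hypervisor exposes over DMI. The gist, as a sketch only; systemd's real logic handles more cases (SMBIOS byte order, containers), and reading product_uuid requires root:

import uuid

DMI_UUID = "/sys/class/dmi/id/product_uuid"

def machine_id_from_vm_uuid() -> str:
    """Render the DMI product UUID in the 32-hex-digit, dash-free form
    that /etc/machine-id uses."""
    with open(DMI_UUID) as f:
        vm_uuid = uuid.UUID(f.read().strip())
    return vm_uuid.hex

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())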
Feb 13 21:36:56.290417 systemd[1]: Reached target swap.target - Swaps. Feb 13 21:36:56.290438 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 21:36:56.290458 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 21:36:56.290478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:36:56.290499 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 21:36:56.290519 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:36:56.290552 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 21:36:56.290574 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 21:36:56.290595 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 21:36:56.290631 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 21:36:56.290654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:56.290675 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 21:36:56.290696 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 21:36:56.290718 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 21:36:56.290747 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 21:36:56.290769 systemd[1]: Reached target machines.target - Containers. Feb 13 21:36:56.290791 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 21:36:56.290844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 21:36:56.290866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 21:36:56.290908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 21:36:56.290931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 21:36:56.290974 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 21:36:56.291015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 21:36:56.291037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 21:36:56.291059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 21:36:56.291080 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 21:36:56.291101 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 21:36:56.291129 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 21:36:56.291151 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 21:36:56.291172 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 21:36:56.291224 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 21:36:56.291262 kernel: loop: module loaded Feb 13 21:36:56.291297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 21:36:56.291319 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Feb 13 21:36:56.291340 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 21:36:56.291366 kernel: fuse: init (API version 7.39) Feb 13 21:36:56.291386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 21:36:56.291407 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 21:36:56.291438 kernel: ACPI: bus type drm_connector registered Feb 13 21:36:56.291473 systemd[1]: Stopped verity-setup.service. Feb 13 21:36:56.291522 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:56.291544 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 21:36:56.291565 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 21:36:56.291598 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 21:36:56.291626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 21:36:56.291664 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 21:36:56.291716 systemd-journald[1136]: Collecting audit messages is disabled. Feb 13 21:36:56.291770 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 21:36:56.291797 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:36:56.291822 systemd-journald[1136]: Journal started Feb 13 21:36:56.291858 systemd-journald[1136]: Runtime Journal (/run/log/journal/bd8c1a52dcff419783d97899afbfd4e7) is 4.7M, max 37.9M, 33.2M free. Feb 13 21:36:55.887072 systemd[1]: Queued start job for default target multi-user.target. Feb 13 21:36:55.906921 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 21:36:55.907731 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 21:36:56.296750 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 21:36:56.298545 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 21:36:56.298815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 21:36:56.300016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 21:36:56.301017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 21:36:56.302218 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 21:36:56.302526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 21:36:56.304684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 21:36:56.304914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 21:36:56.306122 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 21:36:56.306646 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 21:36:56.307847 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 21:36:56.308096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 21:36:56.309369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 21:36:56.310497 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 21:36:56.311771 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 21:36:56.320295 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Feb 13 21:36:56.342639 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 21:36:56.353258 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 21:36:56.363432 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 21:36:56.366295 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 21:36:56.366347 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 21:36:56.369716 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 21:36:56.378040 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 21:36:56.382143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 21:36:56.385505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 21:36:56.391337 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 21:36:56.399369 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 21:36:56.401833 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 21:36:56.410454 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 21:36:56.413325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 21:36:56.417432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 21:36:56.428399 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 21:36:56.431507 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 21:36:56.438417 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 21:36:56.439513 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 21:36:56.445578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 21:36:56.455635 systemd-journald[1136]: Time spent on flushing to /var/log/journal/bd8c1a52dcff419783d97899afbfd4e7 is 151.940ms for 1124 entries. Feb 13 21:36:56.455635 systemd-journald[1136]: System Journal (/var/log/journal/bd8c1a52dcff419783d97899afbfd4e7) is 8.0M, max 584.8M, 576.8M free. Feb 13 21:36:56.664485 systemd-journald[1136]: Received client request to flush runtime journal. Feb 13 21:36:56.664593 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 21:36:56.664636 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 21:36:56.664669 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 21:36:56.498593 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 21:36:56.502597 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 21:36:56.515097 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 21:36:56.553223 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:36:56.564207 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Feb 13 21:36:56.567849 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 21:36:56.623314 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 13 21:36:56.623337 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 13 21:36:56.650431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 21:36:56.660753 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 21:36:56.668408 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 21:36:56.687196 kernel: loop2: detected capacity change from 0 to 8 Feb 13 21:36:56.690256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:36:56.704433 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 21:36:56.730456 kernel: loop3: detected capacity change from 0 to 141000 Feb 13 21:36:56.733779 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 21:36:56.789843 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 21:36:56.798266 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 21:36:56.802443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 21:36:56.837900 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Feb 13 21:36:56.838453 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Feb 13 21:36:56.845215 kernel: loop5: detected capacity change from 0 to 205544 Feb 13 21:36:56.845945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:36:56.885232 kernel: loop6: detected capacity change from 0 to 8 Feb 13 21:36:56.892226 kernel: loop7: detected capacity change from 0 to 141000 Feb 13 21:36:56.936974 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 13 21:36:56.937932 (sd-merge)[1207]: Merged extensions into '/usr'. Feb 13 21:36:56.954944 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 21:36:56.954984 systemd[1]: Reloading... Feb 13 21:36:57.136248 zram_generator::config[1235]: No configuration found. Feb 13 21:36:57.232231 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 21:36:57.407659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:36:57.482548 systemd[1]: Reloading finished in 526 ms. Feb 13 21:36:57.513650 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 21:36:57.516231 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 21:36:57.529533 systemd[1]: Starting ensure-sysext.service... Feb 13 21:36:57.532627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 21:36:57.534236 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 21:36:57.542463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
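The (sd-merge) lines above are systemd-sysext discovering extension images and overlaying them onto /usr. A rough sketch of the discovery half; the search directories listed here are an assumption, and systemd-sysext(8) documents the authoritative path list:

import glob
import os

SEARCH_DIRS = ("/etc/extensions", "/run/extensions",
               "/var/lib/extensions", "/usr/lib/extensions")

def list_sysext_candidates():
    """List the *.raw images and directories that would be considered for the
    'Merged extensions into /usr' step logged above."""
    for d in SEARCH_DIRS:
        for entry in sorted(glob.glob(os.path.join(d, "*"))):
            name = os.path.basename(entry)
            if name.endswith(".raw") or os.path.isdir(entry):
                yield name.removesuffix(".raw"), entry

if __name__ == "__main__":
    for name, path in list_sysext_candidates():
        print(f"{name}: {path}")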
Feb 13 21:36:57.550340 systemd[1]: Reloading requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)... Feb 13 21:36:57.550518 systemd[1]: Reloading... Feb 13 21:36:57.571204 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 21:36:57.571709 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 21:36:57.573567 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 21:36:57.574114 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Feb 13 21:36:57.574350 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Feb 13 21:36:57.582403 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 21:36:57.582422 systemd-tmpfiles[1294]: Skipping /boot Feb 13 21:36:57.609014 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 21:36:57.609033 systemd-tmpfiles[1294]: Skipping /boot Feb 13 21:36:57.640451 systemd-udevd[1296]: Using default interface naming scheme 'v255'. Feb 13 21:36:57.684050 zram_generator::config[1323]: No configuration found. Feb 13 21:36:57.871263 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1352) Feb 13 21:36:57.942814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:36:57.984200 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 21:36:57.991993 kernel: ACPI: button: Power Button [PWRF] Feb 13 21:36:58.025203 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 21:36:58.055105 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 21:36:58.055881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 21:36:58.064499 systemd[1]: Reloading finished in 513 ms. Feb 13 21:36:58.087373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:36:58.096923 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:36:58.109408 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 21:36:58.112228 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 21:36:58.121718 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 21:36:58.122085 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 21:36:58.145462 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:58.151524 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 21:36:58.162614 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 21:36:58.163684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 21:36:58.168532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 21:36:58.181482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 21:36:58.191530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 21:36:58.192476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 21:36:58.196529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 21:36:58.209894 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 21:36:58.217549 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 21:36:58.229142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 21:36:58.235487 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 21:36:58.236297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:58.240160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:58.240494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 21:36:58.240756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 21:36:58.240909 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:58.246506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:58.246830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 21:36:58.252513 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 21:36:58.253401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 21:36:58.253563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:36:58.272603 systemd[1]: Finished ensure-sysext.service. Feb 13 21:36:58.274022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 21:36:58.274329 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 21:36:58.282596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 21:36:58.290434 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 21:36:58.294413 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 21:36:58.295673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 21:36:58.295963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 21:36:58.299758 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 21:36:58.300053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 21:36:58.302906 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 21:36:58.314403 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 21:36:58.314650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 21:36:58.320712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 21:36:58.343368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:36:58.346912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 21:36:58.363943 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 21:36:58.389454 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 21:36:58.393626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 21:36:58.400638 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 21:36:58.426083 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 21:36:58.472412 augenrules[1449]: No rules Feb 13 21:36:58.473373 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 21:36:58.475529 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 21:36:58.513408 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 21:36:58.644760 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 21:36:58.719903 systemd-networkd[1411]: lo: Link UP Feb 13 21:36:58.719917 systemd-networkd[1411]: lo: Gained carrier Feb 13 21:36:58.728512 systemd-networkd[1411]: Enumeration completed Feb 13 21:36:58.729107 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 21:36:58.729113 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:36:58.732095 systemd-resolved[1412]: Positive Trust Anchors: Feb 13 21:36:58.732519 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 21:36:58.732568 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 21:36:58.733301 systemd-networkd[1411]: eth0: Link UP Feb 13 21:36:58.733308 systemd-networkd[1411]: eth0: Gained carrier Feb 13 21:36:58.733328 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 21:36:58.734007 systemd-timesyncd[1423]: No network connectivity, watching for changes. Feb 13 21:36:58.739690 systemd-resolved[1412]: Using system hostname 'srv-7n28k.gb1.brightbox.com'. Feb 13 21:36:58.745626 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 21:36:58.747559 systemd[1]: Started systemd-networkd.service - Network Configuration. 
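The systemd-resolved entries above list the built-in DNSSEC root trust anchor (the ". IN DS 20326 8 2 …" record) and the default negative trust anchors; none of that needs configuration. If local resolver settings were wanted, a drop-in sketch would look like the following; the server address is an assumption (the DHCP-supplied gateway seen just below), not something this log confirms:

sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/10-dns.conf <<'EOF'
[Resolve]
DNS=10.230.24.65
DNSSEC=allow-downgrade
EOF
sudo systemctl restart systemd-resolved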
Feb 13 21:36:58.748310 systemd-networkd[1411]: eth0: DHCPv4 address 10.230.24.66/30, gateway 10.230.24.65 acquired from 10.230.24.65 Feb 13 21:36:58.748894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 21:36:58.749882 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Feb 13 21:36:58.751135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:36:58.754423 systemd[1]: Reached target network.target - Network. Feb 13 21:36:58.756080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:36:58.756887 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 21:36:58.763637 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 21:36:58.773509 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 21:36:58.792427 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 21:36:58.829454 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 21:36:58.830700 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:36:58.831564 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 21:36:58.832439 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 21:36:58.833414 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 21:36:58.834574 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 21:36:58.835480 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 21:36:58.836294 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 21:36:58.837084 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 21:36:58.837150 systemd[1]: Reached target paths.target - Path Units. Feb 13 21:36:58.837829 systemd[1]: Reached target timers.target - Timer Units. Feb 13 21:36:58.840193 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 21:36:58.842819 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 21:36:58.848481 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 21:36:58.851125 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 21:36:58.852621 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 21:36:58.853498 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 21:36:58.854205 systemd[1]: Reached target basic.target - Basic System. Feb 13 21:36:58.854929 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 21:36:58.854982 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 21:36:58.861338 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 21:36:58.866513 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 21:36:58.873212 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 21:36:58.873431 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
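systemd-networkd notes above that eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name". A sketch of a site-specific .network file that pins the match to the NIC's MAC address instead; the MAC value is hypothetical:

sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
[Match]
MACAddress=52:54:00:00:00:01

[Network]
DHCP=yes
EOF
sudo systemctl restart systemd-networkd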
Feb 13 21:36:58.879678 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 21:36:58.884412 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 21:36:58.886246 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 21:36:58.893409 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 21:36:58.898590 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 21:36:58.903405 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 21:36:58.910454 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 21:36:58.912067 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 21:36:58.912783 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 21:36:58.918248 jq[1478]: false Feb 13 21:36:58.920380 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 21:36:58.926755 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 21:36:58.939819 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 21:36:58.940538 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 21:36:58.952830 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 21:36:58.987338 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 21:36:58.987661 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 21:36:58.996840 dbus-daemon[1477]: [system] SELinux support is enabled Feb 13 21:36:59.000028 jq[1487]: true Feb 13 21:36:58.999728 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 21:36:58.999875 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 21:36:59.006569 dbus-daemon[1477]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1411 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 21:36:59.007435 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 21:36:59.007486 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 21:36:59.010666 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 21:36:59.010703 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
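update-engine is starting above, and locksmithd appears a few entries further down with strategy="reboot". On Flatcar both are normally steered from /etc/flatcar/update.conf; the sketch below is an assumption about a typical setup, not what this host actually shipped:

sudo tee /etc/flatcar/update.conf <<'EOF'
# Release channel followed by update-engine.
GROUP=stable
# Reboot coordination strategy read by locksmithd: reboot, etcd-lock, or off.
REBOOT_STRATEGY=off
EOF
sudo systemctl restart update-engine locksmithd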
Feb 13 21:36:59.023257 extend-filesystems[1479]: Found loop4 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found loop5 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found loop6 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found loop7 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda1 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda2 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda3 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found usr Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda4 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda6 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda7 Feb 13 21:36:59.023257 extend-filesystems[1479]: Found vda9 Feb 13 21:36:59.023257 extend-filesystems[1479]: Checking size of /dev/vda9 Feb 13 21:36:59.023412 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 21:36:59.073386 update_engine[1486]: I20250213 21:36:59.052421 1486 main.cc:92] Flatcar Update Engine starting Feb 13 21:36:59.045437 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 21:36:59.048826 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 21:36:59.079070 jq[1503]: true Feb 13 21:36:59.049118 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 21:36:59.075825 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 21:36:59.083694 update_engine[1486]: I20250213 21:36:59.079440 1486 update_check_scheduler.cc:74] Next update check in 5m52s Feb 13 21:36:59.077235 systemd[1]: Started update-engine.service - Update Engine. Feb 13 21:36:59.081172 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 21:36:59.107624 extend-filesystems[1479]: Resized partition /dev/vda9 Feb 13 21:36:59.115205 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024) Feb 13 21:36:59.133721 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 21:36:59.204699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1350) Feb 13 21:36:59.203100 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 21:36:59.203161 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 21:36:59.214642 systemd-logind[1485]: New seat seat0. Feb 13 21:36:59.221947 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 21:36:59.312242 systemd-timesyncd[1423]: Contacted time server 83.151.207.133:123 (2.flatcar.pool.ntp.org). Feb 13 21:36:59.312694 systemd-timesyncd[1423]: Initial clock synchronization to Thu 2025-02-13 21:36:59.517227 UTC. Feb 13 21:36:59.360079 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Feb 13 21:36:59.376306 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 21:36:59.384928 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 21:36:59.388561 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1507 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 21:36:59.389526 systemd[1]: Starting sshkeys.service... Feb 13 21:36:59.390818 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
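A few entries up, extend-filesystems checks the size of /dev/vda9; the resize itself shows up a little further down, where resize2fs grows the mounted root filesystem online from 1617920 to 15121403 blocks. A rough manual equivalent, assuming the partition also needs growing first (growpart comes from cloud-utils and may not be what the Flatcar unit actually invokes):

sudo growpart /dev/vda 9   # extend partition 9 to the end of the disk
sudo resize2fs /dev/vda9   # online ext4 resize while mounted on /
df -h /                    # confirm the new size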
Feb 13 21:36:59.405553 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 21:36:59.428852 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 21:36:59.425629 polkitd[1543]: Started polkitd version 121 Feb 13 21:36:59.475458 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 21:36:59.475458 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 21:36:59.475458 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 21:36:59.433649 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 21:36:59.441665 polkitd[1543]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 21:36:59.486431 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Feb 13 21:36:59.443637 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 21:36:59.441757 polkitd[1543]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 21:36:59.455594 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 21:36:59.445241 polkitd[1543]: Finished loading, compiling and executing 2 rules Feb 13 21:36:59.458346 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 21:36:59.446579 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 21:36:59.471335 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 21:36:59.449268 polkitd[1543]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 21:36:59.471711 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 21:36:59.507322 systemd-hostnamed[1507]: Hostname set to (static) Feb 13 21:36:59.563527 containerd[1496]: time="2025-02-13T21:36:59.563356992Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 21:36:59.597268 containerd[1496]: time="2025-02-13T21:36:59.596256391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:36:59.598676 containerd[1496]: time="2025-02-13T21:36:59.598633732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:36:59.598739 containerd[1496]: time="2025-02-13T21:36:59.598686616Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 21:36:59.598739 containerd[1496]: time="2025-02-13T21:36:59.598711672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 21:36:59.599017 containerd[1496]: time="2025-02-13T21:36:59.598987975Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 21:36:59.599086 containerd[1496]: time="2025-02-13T21:36:59.599029273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 21:36:59.599168 containerd[1496]: time="2025-02-13T21:36:59.599140151Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:36:59.599241 containerd[1496]: time="2025-02-13T21:36:59.599169812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.599490971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.599534482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.599559342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.599592281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.599724455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.600164423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.600347454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.600369889Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.600509592Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 21:36:59.600860 containerd[1496]: time="2025-02-13T21:36:59.600586362Z" level=info msg="metadata content store policy set" policy=shared Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.604409643Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.604490078Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.604517601Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.604540673Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.604565884Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.604744616Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605080069Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605281826Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605316073Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605337138Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605357542Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605395902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605413596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606213 containerd[1496]: time="2025-02-13T21:36:59.605432858Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605466347Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605511608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605532458Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605551340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605579078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605600221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605627370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605650012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605669258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605725586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605757988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605816389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605838089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.606733 containerd[1496]: time="2025-02-13T21:36:59.605859436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.605877878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.605895547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.605915963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.605938025Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.605974254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.605999034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.606018374Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.606108635Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 21:36:59.607341 containerd[1496]: time="2025-02-13T21:36:59.606157239Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 21:36:59.607668 containerd[1496]: time="2025-02-13T21:36:59.606179092Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 21:36:59.607768 containerd[1496]: time="2025-02-13T21:36:59.607739033Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 21:36:59.607872 containerd[1496]: time="2025-02-13T21:36:59.607848258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 21:36:59.607988 containerd[1496]: time="2025-02-13T21:36:59.607963978Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 21:36:59.608089 containerd[1496]: time="2025-02-13T21:36:59.608064819Z" level=info msg="NRI interface is disabled by configuration." Feb 13 21:36:59.608202 containerd[1496]: time="2025-02-13T21:36:59.608157814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 21:36:59.608813 containerd[1496]: time="2025-02-13T21:36:59.608726327Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 21:36:59.609118 containerd[1496]: time="2025-02-13T21:36:59.609093607Z" level=info msg="Connect containerd service" Feb 13 21:36:59.609266 containerd[1496]: time="2025-02-13T21:36:59.609240456Z" level=info msg="using legacy CRI server" Feb 13 21:36:59.609353 containerd[1496]: time="2025-02-13T21:36:59.609331868Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 21:36:59.609673 containerd[1496]: time="2025-02-13T21:36:59.609648344Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 21:36:59.611023 containerd[1496]: time="2025-02-13T21:36:59.610991412Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 21:36:59.611421 
containerd[1496]: time="2025-02-13T21:36:59.611346516Z" level=info msg="Start subscribing containerd event" Feb 13 21:36:59.611491 containerd[1496]: time="2025-02-13T21:36:59.611430699Z" level=info msg="Start recovering state" Feb 13 21:36:59.611598 containerd[1496]: time="2025-02-13T21:36:59.611559573Z" level=info msg="Start event monitor" Feb 13 21:36:59.611681 containerd[1496]: time="2025-02-13T21:36:59.611611387Z" level=info msg="Start snapshots syncer" Feb 13 21:36:59.611681 containerd[1496]: time="2025-02-13T21:36:59.611628566Z" level=info msg="Start cni network conf syncer for default" Feb 13 21:36:59.611681 containerd[1496]: time="2025-02-13T21:36:59.611639645Z" level=info msg="Start streaming server" Feb 13 21:36:59.612412 containerd[1496]: time="2025-02-13T21:36:59.612375493Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 21:36:59.612642 containerd[1496]: time="2025-02-13T21:36:59.612607266Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 21:36:59.612927 containerd[1496]: time="2025-02-13T21:36:59.612893823Z" level=info msg="containerd successfully booted in 0.050982s" Feb 13 21:36:59.612994 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 21:36:59.752423 systemd-networkd[1411]: eth0: Gained IPv6LL Feb 13 21:36:59.756768 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 21:36:59.759798 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 21:36:59.769566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:36:59.776596 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 21:36:59.801841 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 21:36:59.821157 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 21:36:59.849234 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 21:36:59.860308 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 21:36:59.865108 systemd[1]: Started sshd@0-10.230.24.66:22-147.75.109.163:55860.service - OpenSSH per-connection server daemon (147.75.109.163:55860). Feb 13 21:36:59.882229 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 21:36:59.882514 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 21:36:59.895431 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 21:36:59.909123 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 21:36:59.919080 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 21:36:59.924540 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 21:36:59.925684 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 21:37:00.742890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:37:00.752779 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:37:00.831084 sshd[1582]: Accepted publickey for core from 147.75.109.163 port 55860 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:00.833550 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:00.854991 systemd-logind[1485]: New session 1 of user core. Feb 13 21:37:00.856791 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
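Two parts of the containerd configuration dumped a few entries above matter later in this log: the runc runtime runs with SystemdCgroup=true, and the CRI plugin found no CNI network config in /etc/cni/net.d (hence the "failed to load cni during init" error). A sketch of how those settings look in a hand-written /etc/containerd/config.toml; Flatcar supplies its configuration differently, so treat this as illustrative only:

sudo mkdir -p /etc/containerd
sudo tee /etc/containerd/config.toml <<'EOF'
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
EOF
sudo systemctl restart containerd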
Feb 13 21:37:00.864945 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 21:37:00.895136 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 21:37:00.905701 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 21:37:00.914420 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 21:37:01.062369 systemd[1605]: Queued start job for default target default.target. Feb 13 21:37:01.081441 systemd[1605]: Created slice app.slice - User Application Slice. Feb 13 21:37:01.081493 systemd[1605]: Reached target paths.target - Paths. Feb 13 21:37:01.081519 systemd[1605]: Reached target timers.target - Timers. Feb 13 21:37:01.084664 systemd[1605]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 21:37:01.122836 systemd[1605]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 21:37:01.123078 systemd[1605]: Reached target sockets.target - Sockets. Feb 13 21:37:01.123107 systemd[1605]: Reached target basic.target - Basic System. Feb 13 21:37:01.123196 systemd[1605]: Reached target default.target - Main User Target. Feb 13 21:37:01.123295 systemd[1605]: Startup finished in 198ms. Feb 13 21:37:01.123642 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 21:37:01.131655 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 21:37:01.262359 systemd-networkd[1411]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8610:24:19ff:fee6:1842/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8610:24:19ff:fee6:1842/64 assigned by NDisc. Feb 13 21:37:01.262384 systemd-networkd[1411]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 21:37:01.430130 kubelet[1598]: E0213 21:37:01.429952 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:37:01.433358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:37:01.433635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 21:37:01.434311 systemd[1]: kubelet.service: Consumed 1.009s CPU time. Feb 13 21:37:01.788905 systemd[1]: Started sshd@1-10.230.24.66:22-147.75.109.163:56712.service - OpenSSH per-connection server daemon (147.75.109.163:56712). Feb 13 21:37:02.703368 sshd[1619]: Accepted publickey for core from 147.75.109.163 port 56712 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:02.706816 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:02.714529 systemd-logind[1485]: New session 2 of user core. Feb 13 21:37:02.724558 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 21:37:03.341278 sshd[1623]: Connection closed by 147.75.109.163 port 56712 Feb 13 21:37:03.343064 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:03.349459 systemd[1]: sshd@1-10.230.24.66:22-147.75.109.163:56712.service: Deactivated successfully. Feb 13 21:37:03.352446 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 21:37:03.354037 systemd-logind[1485]: Session 2 logged out. 
Waiting for processes to exit. Feb 13 21:37:03.356164 systemd-logind[1485]: Removed session 2. Feb 13 21:37:03.515710 systemd[1]: Started sshd@2-10.230.24.66:22-147.75.109.163:56726.service - OpenSSH per-connection server daemon (147.75.109.163:56726). Feb 13 21:37:04.426063 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 56726 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:04.428080 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:04.435509 systemd-logind[1485]: New session 3 of user core. Feb 13 21:37:04.446481 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 21:37:04.974341 agetty[1588]: failed to open credentials directory Feb 13 21:37:04.974401 agetty[1589]: failed to open credentials directory Feb 13 21:37:04.989944 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 21:37:04.992015 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 21:37:04.999383 systemd-logind[1485]: New session 4 of user core. Feb 13 21:37:05.015562 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 21:37:05.019208 systemd-logind[1485]: New session 5 of user core. Feb 13 21:37:05.028454 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 21:37:05.057570 sshd[1630]: Connection closed by 147.75.109.163 port 56726 Feb 13 21:37:05.058442 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:05.063061 systemd[1]: sshd@2-10.230.24.66:22-147.75.109.163:56726.service: Deactivated successfully. Feb 13 21:37:05.066078 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 21:37:05.067925 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Feb 13 21:37:05.069551 systemd-logind[1485]: Removed session 3. 
Feb 13 21:37:06.006242 coreos-metadata[1476]: Feb 13 21:37:06.005 WARN failed to locate config-drive, using the metadata service API instead Feb 13 21:37:06.032098 coreos-metadata[1476]: Feb 13 21:37:06.032 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 21:37:06.038574 coreos-metadata[1476]: Feb 13 21:37:06.038 INFO Fetch failed with 404: resource not found Feb 13 21:37:06.038574 coreos-metadata[1476]: Feb 13 21:37:06.038 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 21:37:06.039118 coreos-metadata[1476]: Feb 13 21:37:06.039 INFO Fetch successful Feb 13 21:37:06.039314 coreos-metadata[1476]: Feb 13 21:37:06.039 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 21:37:06.050546 coreos-metadata[1476]: Feb 13 21:37:06.050 INFO Fetch successful Feb 13 21:37:06.050797 coreos-metadata[1476]: Feb 13 21:37:06.050 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 21:37:06.063078 coreos-metadata[1476]: Feb 13 21:37:06.063 INFO Fetch successful Feb 13 21:37:06.063325 coreos-metadata[1476]: Feb 13 21:37:06.063 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 21:37:06.076128 coreos-metadata[1476]: Feb 13 21:37:06.076 INFO Fetch successful Feb 13 21:37:06.076320 coreos-metadata[1476]: Feb 13 21:37:06.076 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 21:37:06.093005 coreos-metadata[1476]: Feb 13 21:37:06.092 INFO Fetch successful Feb 13 21:37:06.126753 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 21:37:06.128132 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 21:37:06.554671 coreos-metadata[1552]: Feb 13 21:37:06.554 WARN failed to locate config-drive, using the metadata service API instead Feb 13 21:37:06.577666 coreos-metadata[1552]: Feb 13 21:37:06.577 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 21:37:06.601548 coreos-metadata[1552]: Feb 13 21:37:06.601 INFO Fetch successful Feb 13 21:37:06.601875 coreos-metadata[1552]: Feb 13 21:37:06.601 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 21:37:06.626309 coreos-metadata[1552]: Feb 13 21:37:06.626 INFO Fetch successful Feb 13 21:37:06.628624 unknown[1552]: wrote ssh authorized keys file for user: core Feb 13 21:37:06.654325 update-ssh-keys[1670]: Updated "/home/core/.ssh/authorized_keys" Feb 13 21:37:06.656481 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 21:37:06.659570 systemd[1]: Finished sshkeys.service. Feb 13 21:37:06.661357 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 21:37:06.663351 systemd[1]: Startup finished in 1.310s (kernel) + 13.257s (initrd) + 11.697s (userspace) = 26.265s. Feb 13 21:37:11.684611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 21:37:11.692467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:37:11.861581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
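The coreos-metadata fetches above can be reproduced by hand, which helps when chasing the initial 404 on the OpenStack-style path; these are the same endpoints the agent walked:

curl -sf http://169.254.169.254/openstack/2012-08-10/meta_data.json || echo "no OpenStack metadata document"
curl -sf http://169.254.169.254/latest/meta-data/hostname
curl -sf http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key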
Feb 13 21:37:11.876680 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:37:11.928970 kubelet[1682]: E0213 21:37:11.928876 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:37:11.932632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:37:11.932890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 21:37:15.301544 systemd[1]: Started sshd@3-10.230.24.66:22-147.75.109.163:40944.service - OpenSSH per-connection server daemon (147.75.109.163:40944). Feb 13 21:37:16.196869 sshd[1690]: Accepted publickey for core from 147.75.109.163 port 40944 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:16.199090 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:16.207066 systemd-logind[1485]: New session 6 of user core. Feb 13 21:37:16.214411 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 21:37:16.817355 sshd[1692]: Connection closed by 147.75.109.163 port 40944 Feb 13 21:37:16.818472 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:16.824577 systemd[1]: sshd@3-10.230.24.66:22-147.75.109.163:40944.service: Deactivated successfully. Feb 13 21:37:16.826871 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 21:37:16.827823 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Feb 13 21:37:16.829425 systemd-logind[1485]: Removed session 6. Feb 13 21:37:16.983766 systemd[1]: Started sshd@4-10.230.24.66:22-147.75.109.163:40960.service - OpenSSH per-connection server daemon (147.75.109.163:40960). Feb 13 21:37:17.877230 sshd[1697]: Accepted publickey for core from 147.75.109.163 port 40960 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:17.879638 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:17.887057 systemd-logind[1485]: New session 7 of user core. Feb 13 21:37:17.898439 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 21:37:18.492432 sshd[1699]: Connection closed by 147.75.109.163 port 40960 Feb 13 21:37:18.492249 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:18.497533 systemd[1]: sshd@4-10.230.24.66:22-147.75.109.163:40960.service: Deactivated successfully. Feb 13 21:37:18.499865 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 21:37:18.500860 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Feb 13 21:37:18.502612 systemd-logind[1485]: Removed session 7. Feb 13 21:37:18.655794 systemd[1]: Started sshd@5-10.230.24.66:22-147.75.109.163:40964.service - OpenSSH per-connection server daemon (147.75.109.163:40964). Feb 13 21:37:19.550008 sshd[1704]: Accepted publickey for core from 147.75.109.163 port 40964 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:19.552310 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:19.560350 systemd-logind[1485]: New session 8 of user core. 
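kubelet keeps exiting above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written by kubeadm init/join. Purely as an illustration of what the missing file contains, a minimal KubeletConfiguration sketch with assumed values, not this cluster's real settings:

sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Matches SystemdCgroup = true on the containerd side.
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF
sudo systemctl restart kubelet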
Feb 13 21:37:19.568514 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 21:37:20.171508 sshd[1706]: Connection closed by 147.75.109.163 port 40964 Feb 13 21:37:20.172705 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:20.177421 systemd[1]: sshd@5-10.230.24.66:22-147.75.109.163:40964.service: Deactivated successfully. Feb 13 21:37:20.180006 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 21:37:20.182228 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Feb 13 21:37:20.183945 systemd-logind[1485]: Removed session 8. Feb 13 21:37:20.334544 systemd[1]: Started sshd@6-10.230.24.66:22-147.75.109.163:52666.service - OpenSSH per-connection server daemon (147.75.109.163:52666). Feb 13 21:37:21.234746 sshd[1711]: Accepted publickey for core from 147.75.109.163 port 52666 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:21.236766 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:21.244377 systemd-logind[1485]: New session 9 of user core. Feb 13 21:37:21.252619 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 21:37:21.729354 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 21:37:21.729920 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:37:21.747903 sudo[1714]: pam_unix(sudo:session): session closed for user root Feb 13 21:37:21.892694 sshd[1713]: Connection closed by 147.75.109.163 port 52666 Feb 13 21:37:21.893926 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:21.900926 systemd[1]: sshd@6-10.230.24.66:22-147.75.109.163:52666.service: Deactivated successfully. Feb 13 21:37:21.903728 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 21:37:21.904881 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Feb 13 21:37:21.906941 systemd-logind[1485]: Removed session 9. Feb 13 21:37:22.050440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 21:37:22.061410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:37:22.063567 systemd[1]: Started sshd@7-10.230.24.66:22-147.75.109.163:52668.service - OpenSSH per-connection server daemon (147.75.109.163:52668). Feb 13 21:37:22.069231 systemd[1]: Started sshd@8-10.230.24.66:22-103.172.204.117:48700.service - OpenSSH per-connection server daemon (103.172.204.117:48700). Feb 13 21:37:22.222934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:37:22.237344 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:37:22.294912 kubelet[1732]: E0213 21:37:22.294798 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:37:22.298315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:37:22.298610 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
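The "(kubelet)[…] Referenced but unset environment variable" notes above refer to KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS, which the kubelet unit expands but nothing defines yet. A sketch of the usual kubeadm-style drop-in that sets the extra-args variable; the flag and its value are hypothetical examples:

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/20-extra-args.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.230.24.66"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet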
Feb 13 21:37:22.999065 sshd[1721]: Accepted publickey for core from 147.75.109.163 port 52668 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:23.001463 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:23.008330 systemd-logind[1485]: New session 10 of user core. Feb 13 21:37:23.016422 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 21:37:23.149641 sshd[1722]: Invalid user mazzella from 103.172.204.117 port 48700 Feb 13 21:37:23.349275 sshd[1722]: Received disconnect from 103.172.204.117 port 48700:11: Bye Bye [preauth] Feb 13 21:37:23.349275 sshd[1722]: Disconnected from invalid user mazzella 103.172.204.117 port 48700 [preauth] Feb 13 21:37:23.351788 systemd[1]: sshd@8-10.230.24.66:22-103.172.204.117:48700.service: Deactivated successfully. Feb 13 21:37:23.482492 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 21:37:23.483048 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:37:23.489144 sudo[1743]: pam_unix(sudo:session): session closed for user root Feb 13 21:37:23.497068 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 21:37:23.497579 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:37:23.521138 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 21:37:23.562560 augenrules[1765]: No rules Feb 13 21:37:23.563501 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 21:37:23.563785 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 21:37:23.565232 sudo[1742]: pam_unix(sudo:session): session closed for user root Feb 13 21:37:23.709939 sshd[1739]: Connection closed by 147.75.109.163 port 52668 Feb 13 21:37:23.711461 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:23.715483 systemd[1]: sshd@7-10.230.24.66:22-147.75.109.163:52668.service: Deactivated successfully. Feb 13 21:37:23.717990 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 21:37:23.720114 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Feb 13 21:37:23.721710 systemd-logind[1485]: Removed session 10. Feb 13 21:37:23.872589 systemd[1]: Started sshd@9-10.230.24.66:22-147.75.109.163:52682.service - OpenSSH per-connection server daemon (147.75.109.163:52682). Feb 13 21:37:24.758381 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 52682 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:37:24.760357 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:37:24.767903 systemd-logind[1485]: New session 11 of user core. Feb 13 21:37:24.774487 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 21:37:25.233567 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 21:37:25.234051 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:37:25.940269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:37:25.950540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:37:25.987728 systemd[1]: Reloading requested from client PID 1808 ('systemctl') (unit session-11.scope)... Feb 13 21:37:25.987768 systemd[1]: Reloading... 
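After the sudo commands above remove the shipped files under /etc/audit/rules.d/ and restart audit-rules, augenrules reports "No rules". A sketch of adding a rule back through the same mechanism; the watch rule itself is just an illustrative example:

sudo tee /etc/audit/rules.d/10-sshd-config.rules <<'EOF'
-w /etc/ssh/sshd_config -p wa -k sshd_config
EOF
sudo augenrules --load
sudo auditctl -l   # list the rules now active in the kernel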
Feb 13 21:37:26.134921 zram_generator::config[1847]: No configuration found. Feb 13 21:37:26.318756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:37:26.432744 systemd[1]: Reloading finished in 444 ms. Feb 13 21:37:26.511131 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 21:37:26.511310 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 21:37:26.511792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:37:26.518629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:37:26.663411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:37:26.664814 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 21:37:26.743239 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:37:26.743239 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 21:37:26.743239 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:37:26.744964 kubelet[1916]: I0213 21:37:26.744867 1916 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 21:37:27.175133 kubelet[1916]: I0213 21:37:27.174538 1916 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 21:37:27.175133 kubelet[1916]: I0213 21:37:27.174577 1916 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 21:37:27.176090 kubelet[1916]: I0213 21:37:27.176059 1916 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 21:37:27.202085 kubelet[1916]: I0213 21:37:27.202032 1916 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 21:37:27.212102 kubelet[1916]: E0213 21:37:27.212051 1916 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 21:37:27.212102 kubelet[1916]: I0213 21:37:27.212103 1916 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 21:37:27.219633 kubelet[1916]: I0213 21:37:27.219596 1916 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 21:37:27.221170 kubelet[1916]: I0213 21:37:27.221042 1916 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 21:37:27.221413 kubelet[1916]: I0213 21:37:27.221349 1916 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 21:37:27.221741 kubelet[1916]: I0213 21:37:27.221406 1916 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.24.66","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 21:37:27.221980 kubelet[1916]: I0213 21:37:27.221760 1916 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 21:37:27.221980 kubelet[1916]: I0213 21:37:27.221778 1916 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 21:37:27.221980 kubelet[1916]: I0213 21:37:27.221967 1916 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:37:27.224868 kubelet[1916]: I0213 21:37:27.224370 1916 kubelet.go:408] "Attempting to sync node with API server" Feb 13 21:37:27.224868 kubelet[1916]: I0213 21:37:27.224402 1916 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 21:37:27.224868 kubelet[1916]: I0213 21:37:27.224491 1916 kubelet.go:314] "Adding apiserver pod source" Feb 13 21:37:27.224868 kubelet[1916]: I0213 21:37:27.224528 1916 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 21:37:27.229713 kubelet[1916]: I0213 21:37:27.229606 1916 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 21:37:27.229907 kubelet[1916]: E0213 21:37:27.229858 1916 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:27.230190 kubelet[1916]: E0213 21:37:27.229931 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:27.232051 kubelet[1916]: I0213 21:37:27.232027 1916 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 21:37:27.232740 kubelet[1916]: W0213 21:37:27.232265 1916 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 21:37:27.233969 kubelet[1916]: I0213 21:37:27.233472 1916 server.go:1269] "Started kubelet" Feb 13 21:37:27.236289 kubelet[1916]: I0213 21:37:27.235410 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 21:37:27.246238 kubelet[1916]: I0213 21:37:27.246051 1916 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 21:37:27.248633 kubelet[1916]: I0213 21:37:27.248607 1916 server.go:460] "Adding debug handlers to kubelet server" Feb 13 21:37:27.248805 kubelet[1916]: I0213 21:37:27.248779 1916 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 21:37:27.249138 kubelet[1916]: E0213 21:37:27.249106 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:27.252866 kubelet[1916]: I0213 21:37:27.252803 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 21:37:27.253361 kubelet[1916]: I0213 21:37:27.253338 1916 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 21:37:27.253870 kubelet[1916]: I0213 21:37:27.253844 1916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 21:37:27.255812 kubelet[1916]: I0213 21:37:27.255784 1916 factory.go:221] Registration of the systemd container factory successfully Feb 13 21:37:27.256077 kubelet[1916]: I0213 21:37:27.256038 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 21:37:27.257313 kubelet[1916]: I0213 21:37:27.256095 1916 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 21:37:27.259169 kubelet[1916]: I0213 21:37:27.256229 1916 reconciler.go:26] "Reconciler: start to sync state" Feb 13 21:37:27.261104 kubelet[1916]: E0213 21:37:27.261065 1916 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 21:37:27.262758 kubelet[1916]: I0213 21:37:27.262585 1916 factory.go:221] Registration of the containerd container factory successfully Feb 13 21:37:27.281397 kubelet[1916]: W0213 21:37:27.281344 1916 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 21:37:27.283596 kubelet[1916]: E0213 21:37:27.278987 1916 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.24.66.1823e240614c377f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.24.66,UID:10.230.24.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.24.66,},FirstTimestamp:2025-02-13 21:37:27.233312639 +0000 UTC m=+0.560499022,LastTimestamp:2025-02-13 21:37:27.233312639 +0000 UTC m=+0.560499022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.24.66,}" Feb 13 21:37:27.285889 kubelet[1916]: E0213 21:37:27.285855 1916 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 21:37:27.285889 kubelet[1916]: W0213 21:37:27.282369 1916 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.24.66" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 21:37:27.286047 kubelet[1916]: E0213 21:37:27.285909 1916 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.24.66\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 21:37:27.290808 kubelet[1916]: E0213 21:37:27.290775 1916 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.24.66\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 21:37:27.291327 kubelet[1916]: W0213 21:37:27.291301 1916 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 21:37:27.291475 kubelet[1916]: E0213 21:37:27.291448 1916 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 21:37:27.291722 kubelet[1916]: E0213 21:37:27.291544 1916 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.24.66.1823e24062f37e2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.24.66,UID:10.230.24.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.230.24.66,},FirstTimestamp:2025-02-13 21:37:27.26105246 +0000 UTC m=+0.588238838,LastTimestamp:2025-02-13 21:37:27.26105246 +0000 UTC m=+0.588238838,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.24.66,}" Feb 13 21:37:27.292565 kubelet[1916]: I0213 21:37:27.292518 1916 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 21:37:27.292565 kubelet[1916]: I0213 21:37:27.292564 1916 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 21:37:27.292693 kubelet[1916]: I0213 21:37:27.292603 1916 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:37:27.294413 kubelet[1916]: I0213 21:37:27.294389 1916 policy_none.go:49] "None policy: Start" Feb 13 21:37:27.295271 kubelet[1916]: I0213 21:37:27.295241 1916 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 21:37:27.295432 kubelet[1916]: I0213 21:37:27.295403 1916 state_mem.go:35] "Initializing new in-memory state store" Feb 13 21:37:27.312784 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 21:37:27.329658 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 21:37:27.340935 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 21:37:27.344851 kubelet[1916]: I0213 21:37:27.344818 1916 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 21:37:27.345117 kubelet[1916]: I0213 21:37:27.345095 1916 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 21:37:27.345258 kubelet[1916]: I0213 21:37:27.345136 1916 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 21:37:27.346525 kubelet[1916]: I0213 21:37:27.346497 1916 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 21:37:27.351237 kubelet[1916]: E0213 21:37:27.351211 1916 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.24.66\" not found" Feb 13 21:37:27.362266 kubelet[1916]: I0213 21:37:27.362228 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 21:37:27.365035 kubelet[1916]: I0213 21:37:27.364431 1916 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 21:37:27.365035 kubelet[1916]: I0213 21:37:27.364487 1916 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 21:37:27.365035 kubelet[1916]: I0213 21:37:27.364525 1916 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 21:37:27.365035 kubelet[1916]: E0213 21:37:27.364691 1916 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 21:37:27.449230 kubelet[1916]: I0213 21:37:27.447403 1916 kubelet_node_status.go:72] "Attempting to register node" node="10.230.24.66" Feb 13 21:37:27.499098 kubelet[1916]: I0213 21:37:27.499060 1916 kubelet_node_status.go:75] "Successfully registered node" node="10.230.24.66" Feb 13 21:37:27.499320 kubelet[1916]: E0213 21:37:27.499298 1916 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.230.24.66\": node \"10.230.24.66\" not found" Feb 13 21:37:27.786994 kubelet[1916]: E0213 21:37:27.786812 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:27.827015 sudo[1776]: pam_unix(sudo:session): session closed for user root Feb 13 21:37:27.887395 kubelet[1916]: E0213 21:37:27.887349 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:27.970136 sshd[1775]: Connection closed by 147.75.109.163 port 52682 Feb 13 21:37:27.971015 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Feb 13 21:37:27.975341 systemd[1]: sshd@9-10.230.24.66:22-147.75.109.163:52682.service: Deactivated successfully. Feb 13 21:37:27.978430 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 21:37:27.980514 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Feb 13 21:37:27.982120 systemd-logind[1485]: Removed session 11. 
Feb 13 21:37:27.988142 kubelet[1916]: E0213 21:37:27.988061 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:28.089355 kubelet[1916]: E0213 21:37:28.089056 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:28.179223 kubelet[1916]: I0213 21:37:28.179071 1916 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 21:37:28.179460 kubelet[1916]: W0213 21:37:28.179388 1916 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 21:37:28.189683 kubelet[1916]: E0213 21:37:28.189618 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:28.230225 kubelet[1916]: E0213 21:37:28.230086 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:28.290630 kubelet[1916]: E0213 21:37:28.290577 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:28.391248 kubelet[1916]: E0213 21:37:28.390907 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.24.66\" not found" Feb 13 21:37:28.493139 kubelet[1916]: I0213 21:37:28.493040 1916 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 21:37:28.493783 containerd[1496]: time="2025-02-13T21:37:28.493680553Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 21:37:28.494898 kubelet[1916]: I0213 21:37:28.494507 1916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 21:37:29.230927 kubelet[1916]: E0213 21:37:29.230827 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:29.230927 kubelet[1916]: I0213 21:37:29.230869 1916 apiserver.go:52] "Watching apiserver" Feb 13 21:37:29.240746 kubelet[1916]: E0213 21:37:29.240607 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:29.251538 systemd[1]: Created slice kubepods-besteffort-podf3b3066d_6b6a_4341_a96c_3e41a092c458.slice - libcontainer container kubepods-besteffort-podf3b3066d_6b6a_4341_a96c_3e41a092c458.slice. 
Feb 13 21:37:29.264217 kubelet[1916]: I0213 21:37:29.264016 1916 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 21:37:29.266555 kubelet[1916]: I0213 21:37:29.266514 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b3066d-6b6a-4341-a96c-3e41a092c458-tigera-ca-bundle\") pod \"calico-typha-846cc8cc9c-clrgq\" (UID: \"f3b3066d-6b6a-4341-a96c-3e41a092c458\") " pod="calico-system/calico-typha-846cc8cc9c-clrgq" Feb 13 21:37:29.266650 kubelet[1916]: I0213 21:37:29.266565 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f3b3066d-6b6a-4341-a96c-3e41a092c458-typha-certs\") pod \"calico-typha-846cc8cc9c-clrgq\" (UID: \"f3b3066d-6b6a-4341-a96c-3e41a092c458\") " pod="calico-system/calico-typha-846cc8cc9c-clrgq" Feb 13 21:37:29.266650 kubelet[1916]: I0213 21:37:29.266598 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7fq8\" (UniqueName: \"kubernetes.io/projected/f3b3066d-6b6a-4341-a96c-3e41a092c458-kube-api-access-n7fq8\") pod \"calico-typha-846cc8cc9c-clrgq\" (UID: \"f3b3066d-6b6a-4341-a96c-3e41a092c458\") " pod="calico-system/calico-typha-846cc8cc9c-clrgq" Feb 13 21:37:29.266650 kubelet[1916]: I0213 21:37:29.266626 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-cni-net-dir\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.266806 kubelet[1916]: I0213 21:37:29.266652 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-cni-log-dir\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.266806 kubelet[1916]: I0213 21:37:29.266675 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa37c304-b0cb-4ab2-957c-24161a763ce8-kubelet-dir\") pod \"csi-node-driver-7sxsz\" (UID: \"aa37c304-b0cb-4ab2-957c-24161a763ce8\") " pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:29.266806 kubelet[1916]: I0213 21:37:29.266698 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-lib-modules\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.266806 kubelet[1916]: I0213 21:37:29.266722 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aa37c304-b0cb-4ab2-957c-24161a763ce8-registration-dir\") pod \"csi-node-driver-7sxsz\" (UID: \"aa37c304-b0cb-4ab2-957c-24161a763ce8\") " pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:29.266806 kubelet[1916]: I0213 21:37:29.266757 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b15391cb-630d-4f68-bd16-37302259722a-xtables-lock\") pod \"kube-proxy-s7f4q\" (UID: \"b15391cb-630d-4f68-bd16-37302259722a\") " pod="kube-system/kube-proxy-s7f4q" Feb 13 21:37:29.267057 kubelet[1916]: I0213 21:37:29.266781 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aa37c304-b0cb-4ab2-957c-24161a763ce8-socket-dir\") pod \"csi-node-driver-7sxsz\" (UID: \"aa37c304-b0cb-4ab2-957c-24161a763ce8\") " pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:29.267057 kubelet[1916]: I0213 21:37:29.266805 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-xtables-lock\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267057 kubelet[1916]: I0213 21:37:29.266830 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc7b9d42-0137-453d-a4fc-714915cd6044-tigera-ca-bundle\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267057 kubelet[1916]: I0213 21:37:29.266854 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-var-run-calico\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267057 kubelet[1916]: I0213 21:37:29.266879 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-cni-bin-dir\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267481 kubelet[1916]: I0213 21:37:29.266903 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-flexvol-driver-host\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267481 kubelet[1916]: I0213 21:37:29.266927 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbpzv\" (UniqueName: \"kubernetes.io/projected/fc7b9d42-0137-453d-a4fc-714915cd6044-kube-api-access-fbpzv\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267481 kubelet[1916]: I0213 21:37:29.266952 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aa37c304-b0cb-4ab2-957c-24161a763ce8-varrun\") pod \"csi-node-driver-7sxsz\" (UID: \"aa37c304-b0cb-4ab2-957c-24161a763ce8\") " pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:29.267481 kubelet[1916]: I0213 21:37:29.266991 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/b15391cb-630d-4f68-bd16-37302259722a-kube-proxy\") pod \"kube-proxy-s7f4q\" (UID: \"b15391cb-630d-4f68-bd16-37302259722a\") " pod="kube-system/kube-proxy-s7f4q" Feb 13 21:37:29.267481 kubelet[1916]: I0213 21:37:29.267016 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvj6r\" (UniqueName: \"kubernetes.io/projected/b15391cb-630d-4f68-bd16-37302259722a-kube-api-access-tvj6r\") pod \"kube-proxy-s7f4q\" (UID: \"b15391cb-630d-4f68-bd16-37302259722a\") " pod="kube-system/kube-proxy-s7f4q" Feb 13 21:37:29.267756 kubelet[1916]: I0213 21:37:29.267072 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-policysync\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267756 kubelet[1916]: I0213 21:37:29.267099 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc7b9d42-0137-453d-a4fc-714915cd6044-node-certs\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267756 kubelet[1916]: I0213 21:37:29.267125 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc7b9d42-0137-453d-a4fc-714915cd6044-var-lib-calico\") pod \"calico-node-f2wv6\" (UID: \"fc7b9d42-0137-453d-a4fc-714915cd6044\") " pod="calico-system/calico-node-f2wv6" Feb 13 21:37:29.267756 kubelet[1916]: I0213 21:37:29.267150 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pglrh\" (UniqueName: \"kubernetes.io/projected/aa37c304-b0cb-4ab2-957c-24161a763ce8-kube-api-access-pglrh\") pod \"csi-node-driver-7sxsz\" (UID: \"aa37c304-b0cb-4ab2-957c-24161a763ce8\") " pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:29.267756 kubelet[1916]: I0213 21:37:29.267197 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b15391cb-630d-4f68-bd16-37302259722a-lib-modules\") pod \"kube-proxy-s7f4q\" (UID: \"b15391cb-630d-4f68-bd16-37302259722a\") " pod="kube-system/kube-proxy-s7f4q" Feb 13 21:37:29.279232 systemd[1]: Created slice kubepods-besteffort-podb15391cb_630d_4f68_bd16_37302259722a.slice - libcontainer container kubepods-besteffort-podb15391cb_630d_4f68_bd16_37302259722a.slice. Feb 13 21:37:29.291279 systemd[1]: Created slice kubepods-besteffort-podfc7b9d42_0137_453d_a4fc_714915cd6044.slice - libcontainer container kubepods-besteffort-podfc7b9d42_0137_453d_a4fc_714915cd6044.slice. 
Feb 13 21:37:29.370023 kubelet[1916]: E0213 21:37:29.369852 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.370023 kubelet[1916]: W0213 21:37:29.369883 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.370023 kubelet[1916]: E0213 21:37:29.369942 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.370451 kubelet[1916]: E0213 21:37:29.370407 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.370451 kubelet[1916]: W0213 21:37:29.370425 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.370645 kubelet[1916]: E0213 21:37:29.370460 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.370799 kubelet[1916]: E0213 21:37:29.370779 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.370864 kubelet[1916]: W0213 21:37:29.370801 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.370864 kubelet[1916]: E0213 21:37:29.370818 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.371208 kubelet[1916]: E0213 21:37:29.371160 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.371208 kubelet[1916]: W0213 21:37:29.371201 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.371415 kubelet[1916]: E0213 21:37:29.371295 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.371511 kubelet[1916]: E0213 21:37:29.371486 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.371511 kubelet[1916]: W0213 21:37:29.371499 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.372041 kubelet[1916]: E0213 21:37:29.372007 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:37:29.372367 kubelet[1916]: E0213 21:37:29.372229 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.372367 kubelet[1916]: W0213 21:37:29.372249 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.373028 kubelet[1916]: E0213 21:37:29.372883 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.373245 kubelet[1916]: E0213 21:37:29.373130 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.373245 kubelet[1916]: W0213 21:37:29.373149 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.373245 kubelet[1916]: E0213 21:37:29.373215 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.373859 kubelet[1916]: E0213 21:37:29.373701 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.373859 kubelet[1916]: W0213 21:37:29.373719 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.373859 kubelet[1916]: E0213 21:37:29.373762 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.374284 kubelet[1916]: E0213 21:37:29.374148 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.374284 kubelet[1916]: W0213 21:37:29.374165 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.374284 kubelet[1916]: E0213 21:37:29.374244 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.374606 kubelet[1916]: E0213 21:37:29.374578 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.374606 kubelet[1916]: W0213 21:37:29.374602 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.374812 kubelet[1916]: E0213 21:37:29.374778 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:37:29.374974 kubelet[1916]: E0213 21:37:29.374956 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.375025 kubelet[1916]: W0213 21:37:29.374991 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.375097 kubelet[1916]: E0213 21:37:29.375076 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.375416 kubelet[1916]: E0213 21:37:29.375368 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.375416 kubelet[1916]: W0213 21:37:29.375408 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.375622 kubelet[1916]: E0213 21:37:29.375591 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.375756 kubelet[1916]: E0213 21:37:29.375738 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.375756 kubelet[1916]: W0213 21:37:29.375755 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.375918 kubelet[1916]: E0213 21:37:29.375887 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.376214 kubelet[1916]: E0213 21:37:29.376165 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.376293 kubelet[1916]: W0213 21:37:29.376248 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.376420 kubelet[1916]: E0213 21:37:29.376372 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.376568 kubelet[1916]: E0213 21:37:29.376550 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.376568 kubelet[1916]: W0213 21:37:29.376567 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.376662 kubelet[1916]: E0213 21:37:29.376629 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:37:29.376946 kubelet[1916]: E0213 21:37:29.376919 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.376946 kubelet[1916]: W0213 21:37:29.376944 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.377105 kubelet[1916]: E0213 21:37:29.377084 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.377374 kubelet[1916]: E0213 21:37:29.377355 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.377374 kubelet[1916]: W0213 21:37:29.377374 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.378350 kubelet[1916]: E0213 21:37:29.377706 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.378350 kubelet[1916]: W0213 21:37:29.377725 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.378350 kubelet[1916]: E0213 21:37:29.378059 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.378350 kubelet[1916]: W0213 21:37:29.378073 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.378350 kubelet[1916]: E0213 21:37:29.378347 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.378634 kubelet[1916]: W0213 21:37:29.378362 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.378634 kubelet[1916]: E0213 21:37:29.378626 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.378709 kubelet[1916]: W0213 21:37:29.378641 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.378874 kubelet[1916]: E0213 21:37:29.378850 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.378874 kubelet[1916]: W0213 21:37:29.378869 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.379133 kubelet[1916]: E0213 21:37:29.379098 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Feb 13 21:37:29.379133 kubelet[1916]: W0213 21:37:29.379128 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.379412 kubelet[1916]: E0213 21:37:29.379370 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.379412 kubelet[1916]: W0213 21:37:29.379406 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.379640 kubelet[1916]: E0213 21:37:29.379619 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.379697 kubelet[1916]: W0213 21:37:29.379646 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.380208 kubelet[1916]: E0213 21:37:29.379857 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.380208 kubelet[1916]: W0213 21:37:29.379875 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.380208 kubelet[1916]: E0213 21:37:29.380093 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.380208 kubelet[1916]: W0213 21:37:29.380105 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.380405 kubelet[1916]: E0213 21:37:29.380354 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.380405 kubelet[1916]: W0213 21:37:29.380366 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.380649 kubelet[1916]: E0213 21:37:29.380620 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.380649 kubelet[1916]: W0213 21:37:29.380639 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.380873 kubelet[1916]: E0213 21:37:29.380836 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.380933 kubelet[1916]: E0213 21:37:29.380879 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:37:29.380933 kubelet[1916]: E0213 21:37:29.380900 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.380931 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.380946 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.380956 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.380968 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.380992 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.381003 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.381015 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.381026 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381038 kubelet[1916]: E0213 21:37:29.381037 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.381482 kubelet[1916]: E0213 21:37:29.381057 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:37:29.381482 kubelet[1916]: E0213 21:37:29.381100 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.381482 kubelet[1916]: W0213 21:37:29.381125 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.383199 kubelet[1916]: E0213 21:37:29.381754 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.383199 kubelet[1916]: W0213 21:37:29.381774 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.383312 kubelet[1916]: E0213 21:37:29.383265 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.383312 kubelet[1916]: W0213 21:37:29.383279 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.383312 kubelet[1916]: E0213 21:37:29.383309 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.383651 kubelet[1916]: E0213 21:37:29.383614 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.383651 kubelet[1916]: W0213 21:37:29.383648 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.383784 kubelet[1916]: E0213 21:37:29.383663 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.383935 kubelet[1916]: E0213 21:37:29.383912 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.383995 kubelet[1916]: E0213 21:37:29.383938 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.384060 kubelet[1916]: E0213 21:37:29.384025 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.384060 kubelet[1916]: W0213 21:37:29.384037 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.384155 kubelet[1916]: E0213 21:37:29.384102 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:37:29.384635 kubelet[1916]: E0213 21:37:29.384611 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.384696 kubelet[1916]: W0213 21:37:29.384655 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.384870 kubelet[1916]: E0213 21:37:29.384845 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.385888 kubelet[1916]: E0213 21:37:29.385748 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.385888 kubelet[1916]: W0213 21:37:29.385885 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.388288 kubelet[1916]: E0213 21:37:29.385933 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.388365 kubelet[1916]: E0213 21:37:29.388237 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.396341 kubelet[1916]: W0213 21:37:29.388316 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.396470 kubelet[1916]: E0213 21:37:29.396447 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.396877 kubelet[1916]: E0213 21:37:29.396850 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.396965 kubelet[1916]: W0213 21:37:29.396946 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.397049 kubelet[1916]: E0213 21:37:29.397031 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.397441 kubelet[1916]: E0213 21:37:29.397420 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.397566 kubelet[1916]: W0213 21:37:29.397537 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.397696 kubelet[1916]: E0213 21:37:29.397676 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:37:29.430419 kubelet[1916]: E0213 21:37:29.430399 1916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:37:29.430516 kubelet[1916]: W0213 21:37:29.430496 1916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:37:29.430646 kubelet[1916]: E0213 21:37:29.430625 1916 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:37:29.576675 containerd[1496]: time="2025-02-13T21:37:29.576504884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-846cc8cc9c-clrgq,Uid:f3b3066d-6b6a-4341-a96c-3e41a092c458,Namespace:calico-system,Attempt:0,}" Feb 13 21:37:29.583216 containerd[1496]: time="2025-02-13T21:37:29.583074286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7f4q,Uid:b15391cb-630d-4f68-bd16-37302259722a,Namespace:kube-system,Attempt:0,}" Feb 13 21:37:29.595354 containerd[1496]: time="2025-02-13T21:37:29.595025990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f2wv6,Uid:fc7b9d42-0137-453d-a4fc-714915cd6044,Namespace:calico-system,Attempt:0,}" Feb 13 21:37:30.232338 kubelet[1916]: E0213 21:37:30.232269 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:30.241654 containerd[1496]: time="2025-02-13T21:37:30.241598641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:37:30.242973 containerd[1496]: time="2025-02-13T21:37:30.242916333Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:37:30.245747 containerd[1496]: time="2025-02-13T21:37:30.245704389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 21:37:30.246501 containerd[1496]: time="2025-02-13T21:37:30.246401558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:37:30.248053 containerd[1496]: time="2025-02-13T21:37:30.248015951Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:37:30.249574 containerd[1496]: time="2025-02-13T21:37:30.249300686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:37:30.252296 containerd[1496]: time="2025-02-13T21:37:30.252243097Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:37:30.267564 containerd[1496]: time="2025-02-13T21:37:30.267442037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:37:30.268852 containerd[1496]: time="2025-02-13T21:37:30.268811723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 691.994395ms" Feb 13 21:37:30.272985 containerd[1496]: time="2025-02-13T21:37:30.272813464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.706834ms" Feb 13 21:37:30.274117 containerd[1496]: time="2025-02-13T21:37:30.274078162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 690.914202ms" Feb 13 21:37:30.422837 containerd[1496]: time="2025-02-13T21:37:30.422606061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:37:30.423212 containerd[1496]: time="2025-02-13T21:37:30.423074619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:37:30.423297 containerd[1496]: time="2025-02-13T21:37:30.423211700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:30.423804 containerd[1496]: time="2025-02-13T21:37:30.423743793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:30.424439 containerd[1496]: time="2025-02-13T21:37:30.424253308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:37:30.424439 containerd[1496]: time="2025-02-13T21:37:30.424319263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:37:30.425954 containerd[1496]: time="2025-02-13T21:37:30.425595371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:30.427371 containerd[1496]: time="2025-02-13T21:37:30.426139890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:37:30.427371 containerd[1496]: time="2025-02-13T21:37:30.426258543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:37:30.427371 containerd[1496]: time="2025-02-13T21:37:30.426279656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:30.427371 containerd[1496]: time="2025-02-13T21:37:30.427173386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:30.428546 containerd[1496]: time="2025-02-13T21:37:30.428392121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:30.547473 systemd[1]: Started cri-containerd-853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc.scope - libcontainer container 853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc. Feb 13 21:37:30.552611 systemd[1]: Started cri-containerd-b64f852d519afe0f4cc1db0fa3791b2d87a45906eb38eddd20aeec779e4ed8ba.scope - libcontainer container b64f852d519afe0f4cc1db0fa3791b2d87a45906eb38eddd20aeec779e4ed8ba. Feb 13 21:37:30.566761 systemd[1]: Started cri-containerd-86374ee2b894f8424ff820988873927d7fba8e3dc9655df294e9250f7ee57f32.scope - libcontainer container 86374ee2b894f8424ff820988873927d7fba8e3dc9655df294e9250f7ee57f32. Feb 13 21:37:30.619633 containerd[1496]: time="2025-02-13T21:37:30.619421428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f2wv6,Uid:fc7b9d42-0137-453d-a4fc-714915cd6044,Namespace:calico-system,Attempt:0,} returns sandbox id \"853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc\"" Feb 13 21:37:30.627076 containerd[1496]: time="2025-02-13T21:37:30.626879423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 21:37:30.629087 containerd[1496]: time="2025-02-13T21:37:30.628986193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7f4q,Uid:b15391cb-630d-4f68-bd16-37302259722a,Namespace:kube-system,Attempt:0,} returns sandbox id \"86374ee2b894f8424ff820988873927d7fba8e3dc9655df294e9250f7ee57f32\"" Feb 13 21:37:30.650926 containerd[1496]: time="2025-02-13T21:37:30.650817281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-846cc8cc9c-clrgq,Uid:f3b3066d-6b6a-4341-a96c-3e41a092c458,Namespace:calico-system,Attempt:0,} returns sandbox id \"b64f852d519afe0f4cc1db0fa3791b2d87a45906eb38eddd20aeec779e4ed8ba\"" Feb 13 21:37:31.232631 kubelet[1916]: E0213 21:37:31.232559 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:31.295888 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 21:37:31.366247 kubelet[1916]: E0213 21:37:31.365798 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:31.389272 systemd[1]: run-containerd-runc-k8s.io-853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc-runc.UW1E5u.mount: Deactivated successfully. Feb 13 21:37:32.214217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372061012.mount: Deactivated successfully. 
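The containerd entries above report the sandbox ("pause") image being pulled three times in well under a second each before the three pod sandboxes come up. For orientation only, a minimal Go sketch of the same pull through the containerd client, assuming the default socket path and the CRI "k8s.io" namespace (the kubelet itself drives this through the CRI ImageService rather than this client):

    // pullpause.go - sketch only: pull the sandbox image the way a plain
    // containerd client would (assumes the default socket and the CRI
    // "k8s.io" namespace).
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack; containerd emits ImageCreate/ImageUpdate events
        // like the ones in the log while this runs.
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }
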
Feb 13 21:37:32.233463 kubelet[1916]: E0213 21:37:32.233274 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:32.351934 containerd[1496]: time="2025-02-13T21:37:32.351824041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:32.353435 containerd[1496]: time="2025-02-13T21:37:32.353353277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 21:37:32.354447 containerd[1496]: time="2025-02-13T21:37:32.354390626Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:32.357631 containerd[1496]: time="2025-02-13T21:37:32.357508929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:32.358758 containerd[1496]: time="2025-02-13T21:37:32.358713491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.731763521s" Feb 13 21:37:32.358838 containerd[1496]: time="2025-02-13T21:37:32.358761409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 21:37:32.362274 containerd[1496]: time="2025-02-13T21:37:32.361927713Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 21:37:32.364275 containerd[1496]: time="2025-02-13T21:37:32.364241167Z" level=info msg="CreateContainer within sandbox \"853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 21:37:32.381109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979283846.mount: Deactivated successfully. Feb 13 21:37:32.383309 containerd[1496]: time="2025-02-13T21:37:32.383269421Z" level=info msg="CreateContainer within sandbox \"853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db\"" Feb 13 21:37:32.384077 containerd[1496]: time="2025-02-13T21:37:32.384033152Z" level=info msg="StartContainer for \"522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db\"" Feb 13 21:37:32.427838 systemd[1]: Started cri-containerd-522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db.scope - libcontainer container 522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db. Feb 13 21:37:32.468355 containerd[1496]: time="2025-02-13T21:37:32.467803106Z" level=info msg="StartContainer for \"522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db\" returns successfully" Feb 13 21:37:32.488345 systemd[1]: cri-containerd-522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db.scope: Deactivated successfully. 
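The flexvol-driver container created above (from ghcr.io/flatcar/calico/pod2daemon-flexvol) is what installs the uds binary into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, so the earlier driver-call failures ("executable file not found in $PATH", "unexpected end of JSON input") are just the kubelet probing that directory before the driver exists and getting empty output where it expects a JSON status. A minimal sketch of the init contract a FlexVolume driver has to meet, assuming only the init verb matters here (this is not the Calico uds driver itself):

    // flexinit.go - minimal sketch of the FlexVolume "init" contract the
    // kubelet's driver-call.go expects; not the Calico uds driver.
    package main

    import (
        "encoding/json"
        "os"
    )

    // driverStatus mirrors the JSON envelope the kubelet unmarshals; an
    // empty reply is what yields "unexpected end of JSON input".
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        out := json.NewEncoder(os.Stdout)
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out.Encode(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            return
        }
        // Verbs this sketch does not implement (mount, unmount, ...).
        out.Encode(driverStatus{Status: "Not supported"})
    }
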
Feb 13 21:37:32.521870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db-rootfs.mount: Deactivated successfully. Feb 13 21:37:32.596009 containerd[1496]: time="2025-02-13T21:37:32.595281271Z" level=info msg="shim disconnected" id=522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db namespace=k8s.io Feb 13 21:37:32.596009 containerd[1496]: time="2025-02-13T21:37:32.595788247Z" level=warning msg="cleaning up after shim disconnected" id=522d2f622cf78c7a678306ad0932ad1b9aa2798323e4a1d4b866c82d608d63db namespace=k8s.io Feb 13 21:37:32.596009 containerd[1496]: time="2025-02-13T21:37:32.595811740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:37:32.616498 containerd[1496]: time="2025-02-13T21:37:32.616442126Z" level=warning msg="cleanup warnings time=\"2025-02-13T21:37:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 21:37:33.234484 kubelet[1916]: E0213 21:37:33.234417 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:33.366250 kubelet[1916]: E0213 21:37:33.365798 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:34.062928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844603795.mount: Deactivated successfully. Feb 13 21:37:34.235114 kubelet[1916]: E0213 21:37:34.234929 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:34.755253 containerd[1496]: time="2025-02-13T21:37:34.755141568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:34.756554 containerd[1496]: time="2025-02-13T21:37:34.756493445Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229116" Feb 13 21:37:34.757578 containerd[1496]: time="2025-02-13T21:37:34.757517685Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:34.760203 containerd[1496]: time="2025-02-13T21:37:34.760126357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:34.761436 containerd[1496]: time="2025-02-13T21:37:34.761256685Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.399274264s" Feb 13 21:37:34.761436 containerd[1496]: time="2025-02-13T21:37:34.761302623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" 
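The CreateContainer/StartContainer pairs in this stretch of the log (flexvol-driver above, kube-proxy next) are CRI RuntimeService calls from the kubelet into containerd's gRPC socket. A hedged sketch of that sequence against the CRI v1 API, with the sandbox ID and sandbox config left as placeholders; the kubelet's real code path fills in far more of ContainerConfig:

    // crisketch.go - sketch of the CreateContainer/StartContainer CRI calls
    // visible in the log. Assumptions: containerd's CRI plugin on the
    // default socket, an existing sandbox ID and PodSandboxConfig supplied
    // by the caller.
    package crisketch

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func startInSandbox(ctx context.Context, sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return "", err
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // CreateContainer within the already-running pod sandbox.
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "flexvol-driver"},
                Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", fmt.Errorf("create: %w", err)
        }

        // StartContainer; containerd then runs it inside a
        // cri-containerd-<id>.scope unit, as systemd reports above.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            return "", fmt.Errorf("start: %w", err)
        }
        return created.ContainerId, nil
    }
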
Feb 13 21:37:34.762888 containerd[1496]: time="2025-02-13T21:37:34.762852985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 21:37:34.765229 containerd[1496]: time="2025-02-13T21:37:34.764990024Z" level=info msg="CreateContainer within sandbox \"86374ee2b894f8424ff820988873927d7fba8e3dc9655df294e9250f7ee57f32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 21:37:34.785606 containerd[1496]: time="2025-02-13T21:37:34.785567850Z" level=info msg="CreateContainer within sandbox \"86374ee2b894f8424ff820988873927d7fba8e3dc9655df294e9250f7ee57f32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1caa627b552287cca5ed182f266354af37a8dfb78a459f18ac540dded585eb41\"" Feb 13 21:37:34.786276 containerd[1496]: time="2025-02-13T21:37:34.786241305Z" level=info msg="StartContainer for \"1caa627b552287cca5ed182f266354af37a8dfb78a459f18ac540dded585eb41\"" Feb 13 21:37:34.826367 systemd[1]: Started cri-containerd-1caa627b552287cca5ed182f266354af37a8dfb78a459f18ac540dded585eb41.scope - libcontainer container 1caa627b552287cca5ed182f266354af37a8dfb78a459f18ac540dded585eb41. Feb 13 21:37:34.876486 containerd[1496]: time="2025-02-13T21:37:34.875703114Z" level=info msg="StartContainer for \"1caa627b552287cca5ed182f266354af37a8dfb78a459f18ac540dded585eb41\" returns successfully" Feb 13 21:37:35.235777 kubelet[1916]: E0213 21:37:35.235584 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:35.366063 kubelet[1916]: E0213 21:37:35.365998 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:35.416292 kubelet[1916]: I0213 21:37:35.416170 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s7f4q" podStartSLOduration=4.284531713 podStartE2EDuration="8.41613613s" podCreationTimestamp="2025-02-13 21:37:27 +0000 UTC" firstStartedPulling="2025-02-13 21:37:30.631118459 +0000 UTC m=+3.958304834" lastFinishedPulling="2025-02-13 21:37:34.762722875 +0000 UTC m=+8.089909251" observedRunningTime="2025-02-13 21:37:35.415575648 +0000 UTC m=+8.742762046" watchObservedRunningTime="2025-02-13 21:37:35.41613613 +0000 UTC m=+8.743322512" Feb 13 21:37:36.236339 kubelet[1916]: E0213 21:37:36.236249 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:37.237284 kubelet[1916]: E0213 21:37:37.237123 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:37.324885 containerd[1496]: time="2025-02-13T21:37:37.324801623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:37.326097 containerd[1496]: time="2025-02-13T21:37:37.326035379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 21:37:37.327095 containerd[1496]: time="2025-02-13T21:37:37.327016507Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
21:37:37.331372 containerd[1496]: time="2025-02-13T21:37:37.331336409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:37.332556 containerd[1496]: time="2025-02-13T21:37:37.332374599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.569479627s" Feb 13 21:37:37.332556 containerd[1496]: time="2025-02-13T21:37:37.332429968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 21:37:37.334724 containerd[1496]: time="2025-02-13T21:37:37.334456143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 21:37:37.349145 containerd[1496]: time="2025-02-13T21:37:37.348581858Z" level=info msg="CreateContainer within sandbox \"b64f852d519afe0f4cc1db0fa3791b2d87a45906eb38eddd20aeec779e4ed8ba\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 21:37:37.366723 kubelet[1916]: E0213 21:37:37.365891 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:37.367948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887500617.mount: Deactivated successfully. Feb 13 21:37:37.375806 containerd[1496]: time="2025-02-13T21:37:37.375499642Z" level=info msg="CreateContainer within sandbox \"b64f852d519afe0f4cc1db0fa3791b2d87a45906eb38eddd20aeec779e4ed8ba\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4f87f3fe1018de71b139942f7cd5f7cc8be9841a04e07559c1a8a00dfd352171\"" Feb 13 21:37:37.376201 containerd[1496]: time="2025-02-13T21:37:37.376119670Z" level=info msg="StartContainer for \"4f87f3fe1018de71b139942f7cd5f7cc8be9841a04e07559c1a8a00dfd352171\"" Feb 13 21:37:37.415504 systemd[1]: Started cri-containerd-4f87f3fe1018de71b139942f7cd5f7cc8be9841a04e07559c1a8a00dfd352171.scope - libcontainer container 4f87f3fe1018de71b139942f7cd5f7cc8be9841a04e07559c1a8a00dfd352171. 
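The pod_startup_latency_tracker entry above for kube-proxy-s7f4q reports podStartE2EDuration=8.416s but podStartSLOduration=4.284s; the SLO figure excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving it from the m=+ offsets in that entry reproduces the logged value exactly:

    // latency.go - re-deriving the kube-proxy-s7f4q startup numbers from
    // the pod_startup_latency_tracker entry in the log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Monotonic offsets (the m=+... values) taken from the log entry.
        firstStartedPulling := 3.958304834 // seconds after kubelet start
        lastFinishedPulling := 8.089909251
        e2e := 8.41613613 // podStartE2EDuration in seconds

        pulling := lastFinishedPulling - firstStartedPulling
        slo := e2e - pulling // SLO duration excludes image pulling

        fmt.Println("image pulling:", time.Duration(pulling*float64(time.Second)))
        fmt.Println("podStartSLOduration:", time.Duration(slo*float64(time.Second)))
        // Prints ~4.284531713s, matching the logged podStartSLOduration.
    }
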
Feb 13 21:37:37.474550 containerd[1496]: time="2025-02-13T21:37:37.474433535Z" level=info msg="StartContainer for \"4f87f3fe1018de71b139942f7cd5f7cc8be9841a04e07559c1a8a00dfd352171\" returns successfully" Feb 13 21:37:38.237477 kubelet[1916]: E0213 21:37:38.237413 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:38.444795 kubelet[1916]: I0213 21:37:38.444717 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-846cc8cc9c-clrgq" podStartSLOduration=6.763762823 podStartE2EDuration="13.444680493s" podCreationTimestamp="2025-02-13 21:37:25 +0000 UTC" firstStartedPulling="2025-02-13 21:37:30.65257733 +0000 UTC m=+3.979763705" lastFinishedPulling="2025-02-13 21:37:37.333495003 +0000 UTC m=+10.660681375" observedRunningTime="2025-02-13 21:37:38.443678912 +0000 UTC m=+11.770865299" watchObservedRunningTime="2025-02-13 21:37:38.444680493 +0000 UTC m=+11.771866883" Feb 13 21:37:39.238525 kubelet[1916]: E0213 21:37:39.238422 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:39.366231 kubelet[1916]: E0213 21:37:39.365359 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:40.238693 kubelet[1916]: E0213 21:37:40.238625 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:41.240093 kubelet[1916]: E0213 21:37:41.239997 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:41.367100 kubelet[1916]: E0213 21:37:41.366574 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:42.241146 kubelet[1916]: E0213 21:37:42.241089 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:42.735427 containerd[1496]: time="2025-02-13T21:37:42.735333230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:42.736553 containerd[1496]: time="2025-02-13T21:37:42.736510052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 21:37:42.737343 containerd[1496]: time="2025-02-13T21:37:42.737252261Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:42.740412 containerd[1496]: time="2025-02-13T21:37:42.740318007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:42.741715 containerd[1496]: time="2025-02-13T21:37:42.741528524Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.407020212s" Feb 13 21:37:42.741715 containerd[1496]: time="2025-02-13T21:37:42.741573168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 21:37:42.744874 containerd[1496]: time="2025-02-13T21:37:42.744643549Z" level=info msg="CreateContainer within sandbox \"853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 21:37:42.762083 containerd[1496]: time="2025-02-13T21:37:42.761914727Z" level=info msg="CreateContainer within sandbox \"853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02\"" Feb 13 21:37:42.762909 containerd[1496]: time="2025-02-13T21:37:42.762877376Z" level=info msg="StartContainer for \"43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02\"" Feb 13 21:37:42.801336 systemd[1]: run-containerd-runc-k8s.io-43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02-runc.Hr0MSy.mount: Deactivated successfully. Feb 13 21:37:42.813477 systemd[1]: Started cri-containerd-43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02.scope - libcontainer container 43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02. Feb 13 21:37:42.857895 containerd[1496]: time="2025-02-13T21:37:42.857733960Z" level=info msg="StartContainer for \"43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02\" returns successfully" Feb 13 21:37:43.242875 kubelet[1916]: E0213 21:37:43.242769 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:43.365912 kubelet[1916]: E0213 21:37:43.365485 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:43.760790 containerd[1496]: time="2025-02-13T21:37:43.760723254Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 21:37:43.763700 systemd[1]: cri-containerd-43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02.scope: Deactivated successfully. Feb 13 21:37:43.772444 kubelet[1916]: I0213 21:37:43.772302 1916 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 21:37:43.796511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02-rootfs.mount: Deactivated successfully. 
Feb 13 21:37:43.959860 containerd[1496]: time="2025-02-13T21:37:43.959522577Z" level=info msg="shim disconnected" id=43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02 namespace=k8s.io Feb 13 21:37:43.959860 containerd[1496]: time="2025-02-13T21:37:43.959595604Z" level=warning msg="cleaning up after shim disconnected" id=43b651e10d0d299d4a19dcec389f96ecc9d46a06bf167225da71376262260c02 namespace=k8s.io Feb 13 21:37:43.959860 containerd[1496]: time="2025-02-13T21:37:43.959611604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:37:44.243311 kubelet[1916]: E0213 21:37:44.243238 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:44.400264 update_engine[1486]: I20250213 21:37:44.399876 1486 update_attempter.cc:509] Updating boot flags... Feb 13 21:37:44.448479 containerd[1496]: time="2025-02-13T21:37:44.446318648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 21:37:44.458284 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2526) Feb 13 21:37:44.554986 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2524) Feb 13 21:37:44.631314 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2524) Feb 13 21:37:45.243868 kubelet[1916]: E0213 21:37:45.243731 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:45.374990 systemd[1]: Created slice kubepods-besteffort-podaa37c304_b0cb_4ab2_957c_24161a763ce8.slice - libcontainer container kubepods-besteffort-podaa37c304_b0cb_4ab2_957c_24161a763ce8.slice. Feb 13 21:37:45.378926 containerd[1496]: time="2025-02-13T21:37:45.378866760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:0,}" Feb 13 21:37:45.481972 containerd[1496]: time="2025-02-13T21:37:45.481857468Z" level=error msg="Failed to destroy network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:45.484872 containerd[1496]: time="2025-02-13T21:37:45.484593347Z" level=error msg="encountered an error cleaning up failed sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:45.484872 containerd[1496]: time="2025-02-13T21:37:45.484704455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:45.484719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16-shm.mount: 
Deactivated successfully. Feb 13 21:37:45.486250 kubelet[1916]: E0213 21:37:45.485444 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:45.486250 kubelet[1916]: E0213 21:37:45.485548 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:45.486250 kubelet[1916]: E0213 21:37:45.485586 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:45.486448 kubelet[1916]: E0213 21:37:45.485651 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:46.244888 kubelet[1916]: E0213 21:37:46.244801 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:46.450168 kubelet[1916]: I0213 21:37:46.449972 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16" Feb 13 21:37:46.453213 containerd[1496]: time="2025-02-13T21:37:46.451216787Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:46.453213 containerd[1496]: time="2025-02-13T21:37:46.451536758Z" level=info msg="Ensure that sandbox 6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16 in task-service has been cleanup successfully" Feb 13 21:37:46.457143 containerd[1496]: time="2025-02-13T21:37:46.456305144Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:46.457143 containerd[1496]: time="2025-02-13T21:37:46.456342691Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:46.457433 containerd[1496]: time="2025-02-13T21:37:46.457148729Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:1,}" Feb 13 21:37:46.457653 systemd[1]: run-netns-cni\x2d8d231823\x2dbc40\x2d8559\x2d8a33\x2df24154abd874.mount: Deactivated successfully. Feb 13 21:37:46.575373 containerd[1496]: time="2025-02-13T21:37:46.574635562Z" level=error msg="Failed to destroy network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:46.578718 containerd[1496]: time="2025-02-13T21:37:46.578673345Z" level=error msg="encountered an error cleaning up failed sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:46.578834 containerd[1496]: time="2025-02-13T21:37:46.578753473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:46.579624 kubelet[1916]: E0213 21:37:46.579018 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:46.579624 kubelet[1916]: E0213 21:37:46.579081 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:46.579624 kubelet[1916]: E0213 21:37:46.579117 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:46.579856 kubelet[1916]: E0213 21:37:46.579198 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:46.579967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7-shm.mount: Deactivated successfully. Feb 13 21:37:46.804779 systemd[1]: Created slice kubepods-besteffort-podd307c13f_d9b3_46b9_8352_ab6bcd9eb0cb.slice - libcontainer container kubepods-besteffort-podd307c13f_d9b3_46b9_8352_ab6bcd9eb0cb.slice. Feb 13 21:37:46.805318 kubelet[1916]: W0213 21:37:46.805245 1916 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.230.24.66" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.230.24.66' and this object Feb 13 21:37:46.807049 kubelet[1916]: E0213 21:37:46.805764 1916 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:10.230.24.66\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '10.230.24.66' and this object" logger="UnhandledError" Feb 13 21:37:46.978463 kubelet[1916]: I0213 21:37:46.978373 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkgwj\" (UniqueName: \"kubernetes.io/projected/d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb-kube-api-access-kkgwj\") pod \"nginx-deployment-8587fbcb89-xq22c\" (UID: \"d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb\") " pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:47.225394 kubelet[1916]: E0213 21:37:47.225303 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:47.245454 kubelet[1916]: E0213 21:37:47.245085 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:47.453990 kubelet[1916]: I0213 21:37:47.453933 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7" Feb 13 21:37:47.455114 containerd[1496]: time="2025-02-13T21:37:47.454643723Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:47.458565 containerd[1496]: time="2025-02-13T21:37:47.455511236Z" level=info msg="Ensure that sandbox 376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7 in task-service has been cleanup successfully" Feb 13 21:37:47.458122 systemd[1]: run-netns-cni\x2d537607de\x2da7cf\x2df656\x2d553a\x2df86e2d9e3d4c.mount: Deactivated successfully. 
Feb 13 21:37:47.459947 containerd[1496]: time="2025-02-13T21:37:47.459841010Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:37:47.460059 containerd[1496]: time="2025-02-13T21:37:47.459986879Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:47.460844 containerd[1496]: time="2025-02-13T21:37:47.460789294Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:47.461020 containerd[1496]: time="2025-02-13T21:37:47.460985853Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:47.461020 containerd[1496]: time="2025-02-13T21:37:47.461014627Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:47.462432 containerd[1496]: time="2025-02-13T21:37:47.462386760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:2,}" Feb 13 21:37:47.560056 containerd[1496]: time="2025-02-13T21:37:47.557611334Z" level=error msg="Failed to destroy network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:47.560571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a-shm.mount: Deactivated successfully. 
Feb 13 21:37:47.562898 containerd[1496]: time="2025-02-13T21:37:47.562745517Z" level=error msg="encountered an error cleaning up failed sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:47.563320 containerd[1496]: time="2025-02-13T21:37:47.563269485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:47.563676 kubelet[1916]: E0213 21:37:47.563613 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:47.563780 kubelet[1916]: E0213 21:37:47.563696 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:47.563780 kubelet[1916]: E0213 21:37:47.563727 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:47.563919 kubelet[1916]: E0213 21:37:47.563793 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:48.090710 kubelet[1916]: E0213 21:37:48.090636 1916 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 21:37:48.090710 kubelet[1916]: E0213 21:37:48.090706 1916 projected.go:194] Error preparing data for projected volume kube-api-access-kkgwj for 
pod default/nginx-deployment-8587fbcb89-xq22c: failed to sync configmap cache: timed out waiting for the condition Feb 13 21:37:48.091155 kubelet[1916]: E0213 21:37:48.090818 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb-kube-api-access-kkgwj podName:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb nodeName:}" failed. No retries permitted until 2025-02-13 21:37:48.590780887 +0000 UTC m=+21.917967270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kkgwj" (UniqueName: "kubernetes.io/projected/d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb-kube-api-access-kkgwj") pod "nginx-deployment-8587fbcb89-xq22c" (UID: "d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb") : failed to sync configmap cache: timed out waiting for the condition Feb 13 21:37:48.246506 kubelet[1916]: E0213 21:37:48.246423 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:48.461267 kubelet[1916]: I0213 21:37:48.461082 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a" Feb 13 21:37:48.461953 containerd[1496]: time="2025-02-13T21:37:48.461790740Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:37:48.463030 containerd[1496]: time="2025-02-13T21:37:48.462727982Z" level=info msg="Ensure that sandbox 8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a in task-service has been cleanup successfully" Feb 13 21:37:48.465290 containerd[1496]: time="2025-02-13T21:37:48.464205974Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:37:48.465775 containerd[1496]: time="2025-02-13T21:37:48.465493749Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:37:48.467517 containerd[1496]: time="2025-02-13T21:37:48.466008639Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:48.467517 containerd[1496]: time="2025-02-13T21:37:48.466181707Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:37:48.467517 containerd[1496]: time="2025-02-13T21:37:48.466200129Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:48.467560 systemd[1]: run-netns-cni\x2d4e2b5d78\x2d7334\x2d3e32\x2d9818\x2d6c999553a972.mount: Deactivated successfully. 
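
The MountVolume.SetUp failure above is paced by kubelet's retry bookkeeping: the record reports "No retries permitted until ... (durationBeforeRetry 500ms)". The 500ms initial delay is taken from that line; the doubling and the ceiling in the sketch below are assumptions used only to illustrate how the wait grows on repeated failures, not kubelet's exact constants.

// backoff_sketch.go: rough illustration of the per-operation retry pacing.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first retry delay reported in the log
	maxDelay := 2 * time.Minute     // assumed ceiling, for illustration only
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("retry %d: wait %v before the next MountVolume.SetUp\n", attempt, delay)
		delay *= 2 // assumed exponential growth between failed attempts
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}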
Feb 13 21:37:48.469521 containerd[1496]: time="2025-02-13T21:37:48.469021216Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:48.469521 containerd[1496]: time="2025-02-13T21:37:48.469148657Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:48.469521 containerd[1496]: time="2025-02-13T21:37:48.469189583Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:48.470726 containerd[1496]: time="2025-02-13T21:37:48.470156135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:3,}" Feb 13 21:37:48.590939 containerd[1496]: time="2025-02-13T21:37:48.587269125Z" level=error msg="Failed to destroy network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.593227 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60-shm.mount: Deactivated successfully. Feb 13 21:37:48.595735 containerd[1496]: time="2025-02-13T21:37:48.594404360Z" level=error msg="encountered an error cleaning up failed sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.595735 containerd[1496]: time="2025-02-13T21:37:48.594503369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.595887 kubelet[1916]: E0213 21:37:48.595646 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.595887 kubelet[1916]: E0213 21:37:48.595739 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:48.595887 kubelet[1916]: E0213 21:37:48.595771 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:48.596064 kubelet[1916]: E0213 21:37:48.595821 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:48.612961 containerd[1496]: time="2025-02-13T21:37:48.612900439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:0,}" Feb 13 21:37:48.733758 containerd[1496]: time="2025-02-13T21:37:48.733437994Z" level=error msg="Failed to destroy network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.734903 containerd[1496]: time="2025-02-13T21:37:48.734707239Z" level=error msg="encountered an error cleaning up failed sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.734903 containerd[1496]: time="2025-02-13T21:37:48.734799465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.735871 kubelet[1916]: E0213 21:37:48.735070 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:48.735871 kubelet[1916]: E0213 21:37:48.735145 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:48.735871 kubelet[1916]: E0213 21:37:48.735198 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:48.736053 kubelet[1916]: E0213 21:37:48.735276 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-xq22c" podUID="d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb" Feb 13 21:37:49.247506 kubelet[1916]: E0213 21:37:49.247437 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:49.469194 kubelet[1916]: I0213 21:37:49.468322 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58" Feb 13 21:37:49.469645 containerd[1496]: time="2025-02-13T21:37:49.469605282Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:37:49.471780 containerd[1496]: time="2025-02-13T21:37:49.471586013Z" level=info msg="Ensure that sandbox ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58 in task-service has been cleanup successfully" Feb 13 21:37:49.472010 containerd[1496]: time="2025-02-13T21:37:49.471981511Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:37:49.472480 containerd[1496]: time="2025-02-13T21:37:49.472107110Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" returns successfully" Feb 13 21:37:49.472877 containerd[1496]: time="2025-02-13T21:37:49.472813234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:1,}" Feb 13 21:37:49.474174 systemd[1]: run-netns-cni\x2d1a6ff67b\x2d1adb\x2d00e9\x2d3234\x2da994ec6c6e13.mount: Deactivated successfully. 
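
Each cycle above has the same shape: kubelet first replays StopPodSandbox/TearDown for every sandbox ID that already failed (these return "successfully" because there is nothing left to remove), then issues RunPodSandbox with the Attempt counter incremented, which fails again on the missing nodename file, adding one more ID to tear down next time. A schematic of that loop, with purely illustrative names rather than kubelet's real types:

// retry_pattern.go: schematic of the teardown-then-retry cycle in the log.
package main

import "fmt"

func main() {
	var failed []string // sandbox IDs from previous failed attempts (keeps growing)
	for attempt := 1; attempt <= 4; attempt++ {
		for _, id := range failed {
			fmt.Printf("StopPodSandbox %q / TearDown network (no-op cleanup)\n", id)
		}
		id := fmt.Sprintf("sandbox-%d", attempt) // placeholder ID, not a real hash
		fmt.Printf("RunPodSandbox attempt %d -> fails: /var/lib/calico/nodename missing\n", attempt)
		failed = append(failed, id)
	}
}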
Feb 13 21:37:49.492698 kubelet[1916]: I0213 21:37:49.491632 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60" Feb 13 21:37:49.493210 containerd[1496]: time="2025-02-13T21:37:49.493151440Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:37:49.493565 containerd[1496]: time="2025-02-13T21:37:49.493395653Z" level=info msg="Ensure that sandbox a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60 in task-service has been cleanup successfully" Feb 13 21:37:49.499553 containerd[1496]: time="2025-02-13T21:37:49.495556242Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:37:49.499553 containerd[1496]: time="2025-02-13T21:37:49.495585934Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:37:49.500461 systemd[1]: run-netns-cni\x2d911a7f4e\x2d7869\x2d0cb6\x2d37e8\x2dcaf7d20c54cd.mount: Deactivated successfully. Feb 13 21:37:49.502435 containerd[1496]: time="2025-02-13T21:37:49.502402578Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:37:49.504509 containerd[1496]: time="2025-02-13T21:37:49.504479441Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:37:49.504653 containerd[1496]: time="2025-02-13T21:37:49.504627955Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:37:49.508173 containerd[1496]: time="2025-02-13T21:37:49.508133534Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:49.508637 containerd[1496]: time="2025-02-13T21:37:49.508509977Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:37:49.511389 containerd[1496]: time="2025-02-13T21:37:49.511360716Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:49.512647 containerd[1496]: time="2025-02-13T21:37:49.512613679Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:49.512780 containerd[1496]: time="2025-02-13T21:37:49.512741484Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:49.513240 containerd[1496]: time="2025-02-13T21:37:49.512779112Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:49.514069 containerd[1496]: time="2025-02-13T21:37:49.514040733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:4,}" Feb 13 21:37:49.648477 containerd[1496]: time="2025-02-13T21:37:49.648373502Z" level=error msg="Failed to destroy network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 21:37:49.650067 containerd[1496]: time="2025-02-13T21:37:49.649107548Z" level=error msg="encountered an error cleaning up failed sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:49.650067 containerd[1496]: time="2025-02-13T21:37:49.649246988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:49.650293 kubelet[1916]: E0213 21:37:49.649565 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:49.650293 kubelet[1916]: E0213 21:37:49.649657 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:49.650293 kubelet[1916]: E0213 21:37:49.649704 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:49.650722 kubelet[1916]: E0213 21:37:49.650564 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-xq22c" podUID="d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb" Feb 13 21:37:49.684679 containerd[1496]: time="2025-02-13T21:37:49.684522934Z" level=error msg="Failed to destroy network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:49.685889 containerd[1496]: time="2025-02-13T21:37:49.685612543Z" level=error msg="encountered an error cleaning up failed sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:49.685889 containerd[1496]: time="2025-02-13T21:37:49.685702153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:49.686710 kubelet[1916]: E0213 21:37:49.686236 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:49.686710 kubelet[1916]: E0213 21:37:49.686314 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:49.686710 kubelet[1916]: E0213 21:37:49.686345 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:49.686938 kubelet[1916]: E0213 21:37:49.686413 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:50.249354 kubelet[1916]: E0213 21:37:50.249261 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:50.468001 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6-shm.mount: Deactivated successfully. Feb 13 21:37:50.468469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5-shm.mount: Deactivated successfully. Feb 13 21:37:50.500412 kubelet[1916]: I0213 21:37:50.500098 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6" Feb 13 21:37:50.501432 containerd[1496]: time="2025-02-13T21:37:50.500982087Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:37:50.501432 containerd[1496]: time="2025-02-13T21:37:50.501304378Z" level=info msg="Ensure that sandbox 55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6 in task-service has been cleanup successfully" Feb 13 21:37:50.506289 kubelet[1916]: I0213 21:37:50.503926 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.504413079Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.504638864Z" level=info msg="Ensure that sandbox c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5 in task-service has been cleanup successfully" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.504858918Z" level=info msg="TearDown network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" successfully" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.504887499Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" returns successfully" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.505195490Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.505291949Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.505312081Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" returns successfully" Feb 13 21:37:50.506391 containerd[1496]: time="2025-02-13T21:37:50.505744839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:2,}" Feb 13 21:37:50.508206 containerd[1496]: time="2025-02-13T21:37:50.506972314Z" level=info msg="TearDown network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" successfully" Feb 13 21:37:50.508206 containerd[1496]: time="2025-02-13T21:37:50.507062605Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" returns successfully" Feb 13 21:37:50.508605 systemd[1]: run-netns-cni\x2da8491dd1\x2dcf51\x2d63c1\x2d441a\x2d3ce3a3a31c14.mount: Deactivated successfully. 
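
The "run-netns-cni\x2d..." and "run-containerd-...-shm.mount" entries are systemd mount units named after the paths they track, with '-' inside a path component escaped as \x2d (and '/' rendered as '-'). The small decoder below is only a reading aid for those unit names; it handles just the \xNN escapes visible in this log and is not part of any component logging above.

// unit_unescape.go: decode \xNN escapes in systemd unit names from the log.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnit(name string) string {
	var b strings.Builder
	for i := 0; i < len(name); {
		if i+3 < len(name) && name[i] == '\\' && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(name[i])
		i++
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2d8d231823\x2dbc40\x2d8559\x2d8a33\x2df24154abd874.mount`
	// prints: run-netns-cni-8d231823-bc40-8559-8a33-f24154abd874.mount
	fmt.Println(unescapeUnit(unit))
}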
Feb 13 21:37:50.510340 containerd[1496]: time="2025-02-13T21:37:50.510307828Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:37:50.511428 containerd[1496]: time="2025-02-13T21:37:50.510808603Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:37:50.512770 containerd[1496]: time="2025-02-13T21:37:50.511600978Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:37:50.514817 systemd[1]: run-netns-cni\x2d7d169368\x2dd4d8\x2dd9c2\x2df12b\x2d71a5eed42e33.mount: Deactivated successfully. Feb 13 21:37:50.518964 containerd[1496]: time="2025-02-13T21:37:50.518367148Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:37:50.518964 containerd[1496]: time="2025-02-13T21:37:50.518496673Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:37:50.518964 containerd[1496]: time="2025-02-13T21:37:50.518517976Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:37:50.521281 containerd[1496]: time="2025-02-13T21:37:50.519289835Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:50.521281 containerd[1496]: time="2025-02-13T21:37:50.519393778Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:37:50.521281 containerd[1496]: time="2025-02-13T21:37:50.519413364Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:50.521281 containerd[1496]: time="2025-02-13T21:37:50.520469676Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:50.521281 containerd[1496]: time="2025-02-13T21:37:50.520627626Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:50.521281 containerd[1496]: time="2025-02-13T21:37:50.520646843Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:50.521281 containerd[1496]: time="2025-02-13T21:37:50.521132327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:5,}" Feb 13 21:37:50.698211 containerd[1496]: time="2025-02-13T21:37:50.697953609Z" level=error msg="Failed to destroy network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:50.700204 containerd[1496]: time="2025-02-13T21:37:50.699297073Z" level=error msg="encountered an error cleaning up failed sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 21:37:50.700204 containerd[1496]: time="2025-02-13T21:37:50.699387648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:50.700341 kubelet[1916]: E0213 21:37:50.699682 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:50.700341 kubelet[1916]: E0213 21:37:50.699763 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:50.700341 kubelet[1916]: E0213 21:37:50.699794 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:50.700585 kubelet[1916]: E0213 21:37:50.699854 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:50.710215 containerd[1496]: time="2025-02-13T21:37:50.709657874Z" level=error msg="Failed to destroy network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:50.710215 containerd[1496]: time="2025-02-13T21:37:50.710124657Z" level=error msg="encountered an error cleaning up failed sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:50.710578 containerd[1496]: time="2025-02-13T21:37:50.710533718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:50.711119 kubelet[1916]: E0213 21:37:50.710997 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:50.711373 kubelet[1916]: E0213 21:37:50.711243 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:50.711793 kubelet[1916]: E0213 21:37:50.711286 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:50.712002 kubelet[1916]: E0213 21:37:50.711938 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-xq22c" podUID="d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb" Feb 13 21:37:51.249941 kubelet[1916]: E0213 21:37:51.249849 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:51.469260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3-shm.mount: Deactivated successfully. 
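
The "...-shm.mount: Deactivated successfully" units correspond, by systemd's path-to-unit naming, to per-sandbox shm mounts under /run/containerd/io.containerd.grpc.v1.cri/sandboxes/<id>/shm being unmounted as each failed sandbox is cleaned up. The path here is reconstructed from the unit names in the log, not stated by it, so treat it as an assumption. The sketch below lists any such mounts still present; finding one long after the matching failure would suggest cleanup did not finish.

// shm_mounts.go: list leftover per-sandbox shm mounts, if any remain.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// fields[1] is the mount point; match the sandbox shm layout assumed above.
		if len(fields) > 1 && strings.Contains(fields[1], "io.containerd.grpc.v1.cri/sandboxes/") &&
			strings.HasSuffix(fields[1], "/shm") {
			fmt.Println("leftover sandbox shm mount:", fields[1])
		}
	}
}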
Feb 13 21:37:51.510673 kubelet[1916]: I0213 21:37:51.510524 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0" Feb 13 21:37:51.512054 containerd[1496]: time="2025-02-13T21:37:51.511742902Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" Feb 13 21:37:51.512054 containerd[1496]: time="2025-02-13T21:37:51.511995325Z" level=info msg="Ensure that sandbox 4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0 in task-service has been cleanup successfully" Feb 13 21:37:51.515295 containerd[1496]: time="2025-02-13T21:37:51.514166714Z" level=info msg="TearDown network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" successfully" Feb 13 21:37:51.515295 containerd[1496]: time="2025-02-13T21:37:51.514209075Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" returns successfully" Feb 13 21:37:51.516770 systemd[1]: run-netns-cni\x2dfcf93007\x2d6d0a\x2d873e\x2d3530\x2d6e79172fc2cd.mount: Deactivated successfully. Feb 13 21:37:51.518055 containerd[1496]: time="2025-02-13T21:37:51.517748475Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:37:51.518055 containerd[1496]: time="2025-02-13T21:37:51.517859063Z" level=info msg="TearDown network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" successfully" Feb 13 21:37:51.518055 containerd[1496]: time="2025-02-13T21:37:51.517881175Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" returns successfully" Feb 13 21:37:51.519092 containerd[1496]: time="2025-02-13T21:37:51.519058164Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:37:51.519243 containerd[1496]: time="2025-02-13T21:37:51.519206008Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:37:51.519337 containerd[1496]: time="2025-02-13T21:37:51.519241693Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:37:51.519733 containerd[1496]: time="2025-02-13T21:37:51.519700240Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:37:51.519883 containerd[1496]: time="2025-02-13T21:37:51.519815725Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:37:51.520007 containerd[1496]: time="2025-02-13T21:37:51.519883600Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:37:51.520194 kubelet[1916]: I0213 21:37:51.520149 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3" Feb 13 21:37:51.521555 containerd[1496]: time="2025-02-13T21:37:51.521525394Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:51.522496 containerd[1496]: time="2025-02-13T21:37:51.522292497Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" 
successfully" Feb 13 21:37:51.522496 containerd[1496]: time="2025-02-13T21:37:51.522317736Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:51.522496 containerd[1496]: time="2025-02-13T21:37:51.522389655Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" Feb 13 21:37:51.522700 containerd[1496]: time="2025-02-13T21:37:51.522611867Z" level=info msg="Ensure that sandbox b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3 in task-service has been cleanup successfully" Feb 13 21:37:51.525761 systemd[1]: run-netns-cni\x2df1a5c827\x2d9274\x2d44ed\x2d396c\x2d4bfe3735ecc8.mount: Deactivated successfully. Feb 13 21:37:51.526747 containerd[1496]: time="2025-02-13T21:37:51.526003254Z" level=info msg="TearDown network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" successfully" Feb 13 21:37:51.526747 containerd[1496]: time="2025-02-13T21:37:51.526034026Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" returns successfully" Feb 13 21:37:51.527137 containerd[1496]: time="2025-02-13T21:37:51.527068726Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:37:51.527248 containerd[1496]: time="2025-02-13T21:37:51.527171694Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:51.528167 containerd[1496]: time="2025-02-13T21:37:51.527353507Z" level=info msg="TearDown network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" successfully" Feb 13 21:37:51.528167 containerd[1496]: time="2025-02-13T21:37:51.527492174Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" returns successfully" Feb 13 21:37:51.528167 containerd[1496]: time="2025-02-13T21:37:51.527424656Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:51.528167 containerd[1496]: time="2025-02-13T21:37:51.527669344Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:51.529487 containerd[1496]: time="2025-02-13T21:37:51.528966339Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:37:51.529487 containerd[1496]: time="2025-02-13T21:37:51.529378122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:6,}" Feb 13 21:37:51.529998 containerd[1496]: time="2025-02-13T21:37:51.529953760Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:37:51.529998 containerd[1496]: time="2025-02-13T21:37:51.529992970Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" returns successfully" Feb 13 21:37:51.538432 containerd[1496]: time="2025-02-13T21:37:51.538341572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:3,}" Feb 13 21:37:51.719970 containerd[1496]: time="2025-02-13T21:37:51.719800363Z" level=error msg="Failed to 
destroy network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.721306 containerd[1496]: time="2025-02-13T21:37:51.720777185Z" level=error msg="encountered an error cleaning up failed sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.721306 containerd[1496]: time="2025-02-13T21:37:51.720877987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.721506 kubelet[1916]: E0213 21:37:51.721175 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.722421 kubelet[1916]: E0213 21:37:51.721890 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:51.722421 kubelet[1916]: E0213 21:37:51.721928 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:51.722421 kubelet[1916]: E0213 21:37:51.722022 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-xq22c" podUID="d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb" Feb 13 21:37:51.724659 
containerd[1496]: time="2025-02-13T21:37:51.724289643Z" level=error msg="Failed to destroy network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.725361 containerd[1496]: time="2025-02-13T21:37:51.725325344Z" level=error msg="encountered an error cleaning up failed sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.725447 containerd[1496]: time="2025-02-13T21:37:51.725393661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.725950 kubelet[1916]: E0213 21:37:51.725548 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:51.725950 kubelet[1916]: E0213 21:37:51.725589 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:51.725950 kubelet[1916]: E0213 21:37:51.725614 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:51.726118 kubelet[1916]: E0213 21:37:51.725651 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" 
podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:52.250349 kubelet[1916]: E0213 21:37:52.250276 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:52.467169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7-shm.mount: Deactivated successfully. Feb 13 21:37:52.530221 kubelet[1916]: I0213 21:37:52.529997 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7" Feb 13 21:37:52.531224 containerd[1496]: time="2025-02-13T21:37:52.530800196Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\"" Feb 13 21:37:52.532495 containerd[1496]: time="2025-02-13T21:37:52.532074399Z" level=info msg="Ensure that sandbox b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7 in task-service has been cleanup successfully" Feb 13 21:37:52.534313 containerd[1496]: time="2025-02-13T21:37:52.534280744Z" level=info msg="TearDown network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" successfully" Feb 13 21:37:52.534468 containerd[1496]: time="2025-02-13T21:37:52.534440388Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" returns successfully" Feb 13 21:37:52.536419 containerd[1496]: time="2025-02-13T21:37:52.535226830Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" Feb 13 21:37:52.536419 containerd[1496]: time="2025-02-13T21:37:52.535356840Z" level=info msg="TearDown network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" successfully" Feb 13 21:37:52.536419 containerd[1496]: time="2025-02-13T21:37:52.535389474Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" returns successfully" Feb 13 21:37:52.536500 systemd[1]: run-netns-cni\x2d79662d3e\x2ddd02\x2d4cb9\x2d5021\x2d0534a111d699.mount: Deactivated successfully. 
Feb 13 21:37:52.538970 containerd[1496]: time="2025-02-13T21:37:52.538825583Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:37:52.539066 containerd[1496]: time="2025-02-13T21:37:52.538965661Z" level=info msg="TearDown network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" successfully" Feb 13 21:37:52.539066 containerd[1496]: time="2025-02-13T21:37:52.538985632Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" returns successfully" Feb 13 21:37:52.540811 containerd[1496]: time="2025-02-13T21:37:52.540165161Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:37:52.540874 containerd[1496]: time="2025-02-13T21:37:52.540835130Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:37:52.540874 containerd[1496]: time="2025-02-13T21:37:52.540855445Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:37:52.542112 containerd[1496]: time="2025-02-13T21:37:52.542066992Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:37:52.542418 containerd[1496]: time="2025-02-13T21:37:52.542275991Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:37:52.542418 containerd[1496]: time="2025-02-13T21:37:52.542404705Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:37:52.543205 containerd[1496]: time="2025-02-13T21:37:52.543150772Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:52.543317 containerd[1496]: time="2025-02-13T21:37:52.543291126Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:37:52.543396 containerd[1496]: time="2025-02-13T21:37:52.543316622Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:52.543627 kubelet[1916]: I0213 21:37:52.543584 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2" Feb 13 21:37:52.549208 containerd[1496]: time="2025-02-13T21:37:52.545589141Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\"" Feb 13 21:37:52.549208 containerd[1496]: time="2025-02-13T21:37:52.545918583Z" level=info msg="Ensure that sandbox c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2 in task-service has been cleanup successfully" Feb 13 21:37:52.549738 containerd[1496]: time="2025-02-13T21:37:52.549675337Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:52.550817 systemd[1]: run-netns-cni\x2d50c17c75\x2d3bbe\x2d44d6\x2de53b\x2d33f5e955b2df.mount: Deactivated successfully. 
Feb 13 21:37:52.553335 containerd[1496]: time="2025-02-13T21:37:52.553293644Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:52.553505 containerd[1496]: time="2025-02-13T21:37:52.553475617Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:52.553705 containerd[1496]: time="2025-02-13T21:37:52.549883312Z" level=info msg="TearDown network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" successfully" Feb 13 21:37:52.553820 containerd[1496]: time="2025-02-13T21:37:52.553796058Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" returns successfully" Feb 13 21:37:52.554508 containerd[1496]: time="2025-02-13T21:37:52.554477350Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" Feb 13 21:37:52.556342 containerd[1496]: time="2025-02-13T21:37:52.556294934Z" level=info msg="TearDown network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" successfully" Feb 13 21:37:52.556342 containerd[1496]: time="2025-02-13T21:37:52.556321821Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" returns successfully" Feb 13 21:37:52.556741 containerd[1496]: time="2025-02-13T21:37:52.556700912Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:37:52.556912 containerd[1496]: time="2025-02-13T21:37:52.556885382Z" level=info msg="TearDown network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" successfully" Feb 13 21:37:52.556971 containerd[1496]: time="2025-02-13T21:37:52.556914319Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" returns successfully" Feb 13 21:37:52.557122 containerd[1496]: time="2025-02-13T21:37:52.557094062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:7,}" Feb 13 21:37:52.568019 containerd[1496]: time="2025-02-13T21:37:52.567966780Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:37:52.568169 containerd[1496]: time="2025-02-13T21:37:52.568111885Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:37:52.568169 containerd[1496]: time="2025-02-13T21:37:52.568132594Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" returns successfully" Feb 13 21:37:52.572361 containerd[1496]: time="2025-02-13T21:37:52.572252585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:4,}" Feb 13 21:37:52.728977 containerd[1496]: time="2025-02-13T21:37:52.728708615Z" level=error msg="Failed to destroy network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:52.730740 containerd[1496]: 
time="2025-02-13T21:37:52.730695193Z" level=error msg="encountered an error cleaning up failed sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:52.730874 containerd[1496]: time="2025-02-13T21:37:52.730835881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:52.731703 kubelet[1916]: E0213 21:37:52.731229 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:52.731703 kubelet[1916]: E0213 21:37:52.731311 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:52.731703 kubelet[1916]: E0213 21:37:52.731342 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:52.731928 kubelet[1916]: E0213 21:37:52.731425 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:52.769291 containerd[1496]: time="2025-02-13T21:37:52.769219584Z" level=error msg="Failed to destroy network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Feb 13 21:37:52.769907 containerd[1496]: time="2025-02-13T21:37:52.769864697Z" level=error msg="encountered an error cleaning up failed sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:52.769970 containerd[1496]: time="2025-02-13T21:37:52.769946883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:52.770396 kubelet[1916]: E0213 21:37:52.770265 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:52.770396 kubelet[1916]: E0213 21:37:52.770348 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:52.770396 kubelet[1916]: E0213 21:37:52.770392 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:52.770779 kubelet[1916]: E0213 21:37:52.770481 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-xq22c" podUID="d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb" Feb 13 21:37:53.251244 kubelet[1916]: E0213 21:37:53.251164 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:53.467451 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7-shm.mount: Deactivated successfully. Feb 13 21:37:53.548812 kubelet[1916]: I0213 21:37:53.548662 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43" Feb 13 21:37:53.550483 containerd[1496]: time="2025-02-13T21:37:53.549900709Z" level=info msg="StopPodSandbox for \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\"" Feb 13 21:37:53.550483 containerd[1496]: time="2025-02-13T21:37:53.550281373Z" level=info msg="Ensure that sandbox 27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43 in task-service has been cleanup successfully" Feb 13 21:37:53.551113 containerd[1496]: time="2025-02-13T21:37:53.551071243Z" level=info msg="TearDown network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" successfully" Feb 13 21:37:53.551901 containerd[1496]: time="2025-02-13T21:37:53.551236335Z" level=info msg="StopPodSandbox for \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" returns successfully" Feb 13 21:37:53.551901 containerd[1496]: time="2025-02-13T21:37:53.551688825Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\"" Feb 13 21:37:53.551901 containerd[1496]: time="2025-02-13T21:37:53.551786659Z" level=info msg="TearDown network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" successfully" Feb 13 21:37:53.551901 containerd[1496]: time="2025-02-13T21:37:53.551816805Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" returns successfully" Feb 13 21:37:53.552992 containerd[1496]: time="2025-02-13T21:37:53.552631624Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" Feb 13 21:37:53.552992 containerd[1496]: time="2025-02-13T21:37:53.552759843Z" level=info msg="TearDown network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" successfully" Feb 13 21:37:53.552992 containerd[1496]: time="2025-02-13T21:37:53.552778938Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" returns successfully" Feb 13 21:37:53.553872 containerd[1496]: time="2025-02-13T21:37:53.553435948Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:37:53.553872 containerd[1496]: time="2025-02-13T21:37:53.553604708Z" level=info msg="TearDown network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" successfully" Feb 13 21:37:53.553872 containerd[1496]: time="2025-02-13T21:37:53.553624479Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" returns successfully" Feb 13 21:37:53.555125 containerd[1496]: time="2025-02-13T21:37:53.554349988Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:37:53.555125 containerd[1496]: time="2025-02-13T21:37:53.554458163Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:37:53.555125 containerd[1496]: time="2025-02-13T21:37:53.554475355Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" 
returns successfully" Feb 13 21:37:53.555125 containerd[1496]: time="2025-02-13T21:37:53.554944536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:5,}" Feb 13 21:37:53.556162 systemd[1]: run-netns-cni\x2db4acbd73\x2d1f2e\x2db041\x2dba1e\x2df7202d67d1bc.mount: Deactivated successfully. Feb 13 21:37:53.570799 kubelet[1916]: I0213 21:37:53.569854 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7" Feb 13 21:37:53.571114 containerd[1496]: time="2025-02-13T21:37:53.571080194Z" level=info msg="StopPodSandbox for \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\"" Feb 13 21:37:53.571777 containerd[1496]: time="2025-02-13T21:37:53.571736960Z" level=info msg="Ensure that sandbox 493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7 in task-service has been cleanup successfully" Feb 13 21:37:53.572638 containerd[1496]: time="2025-02-13T21:37:53.572589755Z" level=info msg="TearDown network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" successfully" Feb 13 21:37:53.572821 containerd[1496]: time="2025-02-13T21:37:53.572750461Z" level=info msg="StopPodSandbox for \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" returns successfully" Feb 13 21:37:53.574464 containerd[1496]: time="2025-02-13T21:37:53.574410810Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\"" Feb 13 21:37:53.576879 containerd[1496]: time="2025-02-13T21:37:53.576693879Z" level=info msg="TearDown network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" successfully" Feb 13 21:37:53.576879 containerd[1496]: time="2025-02-13T21:37:53.576787414Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" returns successfully" Feb 13 21:37:53.577647 systemd[1]: run-netns-cni\x2d21020258\x2dc881\x2dbe9a\x2d3b27\x2daf384451a8b5.mount: Deactivated successfully. 
Feb 13 21:37:53.578314 containerd[1496]: time="2025-02-13T21:37:53.577945063Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" Feb 13 21:37:53.578314 containerd[1496]: time="2025-02-13T21:37:53.578065284Z" level=info msg="TearDown network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" successfully" Feb 13 21:37:53.578314 containerd[1496]: time="2025-02-13T21:37:53.578084445Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" returns successfully" Feb 13 21:37:53.580269 containerd[1496]: time="2025-02-13T21:37:53.580148686Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:37:53.580354 containerd[1496]: time="2025-02-13T21:37:53.580294910Z" level=info msg="TearDown network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" successfully" Feb 13 21:37:53.580354 containerd[1496]: time="2025-02-13T21:37:53.580315499Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" returns successfully" Feb 13 21:37:53.581310 containerd[1496]: time="2025-02-13T21:37:53.581226488Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:37:53.581836 containerd[1496]: time="2025-02-13T21:37:53.581767651Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:37:53.581836 containerd[1496]: time="2025-02-13T21:37:53.581793504Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:37:53.582868 containerd[1496]: time="2025-02-13T21:37:53.582631816Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:37:53.582868 containerd[1496]: time="2025-02-13T21:37:53.582732551Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:37:53.582868 containerd[1496]: time="2025-02-13T21:37:53.582751503Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:37:53.583979 containerd[1496]: time="2025-02-13T21:37:53.583618629Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:53.583979 containerd[1496]: time="2025-02-13T21:37:53.583724922Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:37:53.583979 containerd[1496]: time="2025-02-13T21:37:53.583743518Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:53.584951 containerd[1496]: time="2025-02-13T21:37:53.584719844Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:53.585052 containerd[1496]: time="2025-02-13T21:37:53.584998064Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:53.585052 containerd[1496]: time="2025-02-13T21:37:53.585024467Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" 
returns successfully" Feb 13 21:37:53.586485 containerd[1496]: time="2025-02-13T21:37:53.586162742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:8,}" Feb 13 21:37:53.746332 containerd[1496]: time="2025-02-13T21:37:53.746260429Z" level=error msg="Failed to destroy network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.747704 containerd[1496]: time="2025-02-13T21:37:53.746737420Z" level=error msg="encountered an error cleaning up failed sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.747704 containerd[1496]: time="2025-02-13T21:37:53.746814249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.747901 kubelet[1916]: E0213 21:37:53.747136 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.747901 kubelet[1916]: E0213 21:37:53.747240 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:53.747901 kubelet[1916]: E0213 21:37:53.747288 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-xq22c" Feb 13 21:37:53.748105 kubelet[1916]: E0213 21:37:53.747352 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-xq22c_default(d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-xq22c" podUID="d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb" Feb 13 21:37:53.751723 containerd[1496]: time="2025-02-13T21:37:53.749919749Z" level=error msg="Failed to destroy network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.751723 containerd[1496]: time="2025-02-13T21:37:53.750429748Z" level=error msg="encountered an error cleaning up failed sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.751723 containerd[1496]: time="2025-02-13T21:37:53.750493299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.752101 kubelet[1916]: E0213 21:37:53.752040 1916 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:37:53.752393 kubelet[1916]: E0213 21:37:53.752246 1916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:53.752393 kubelet[1916]: E0213 21:37:53.752281 1916 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7sxsz" Feb 13 21:37:53.752393 kubelet[1916]: E0213 21:37:53.752334 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7sxsz_calico-system(aa37c304-b0cb-4ab2-957c-24161a763ce8)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7sxsz" podUID="aa37c304-b0cb-4ab2-957c-24161a763ce8" Feb 13 21:37:54.035815 containerd[1496]: time="2025-02-13T21:37:54.035752565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:54.037030 containerd[1496]: time="2025-02-13T21:37:54.036967864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 21:37:54.037218 containerd[1496]: time="2025-02-13T21:37:54.037169181Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:54.039925 containerd[1496]: time="2025-02-13T21:37:54.039886810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:54.041130 containerd[1496]: time="2025-02-13T21:37:54.041088965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.594676819s" Feb 13 21:37:54.041322 containerd[1496]: time="2025-02-13T21:37:54.041292918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 21:37:54.066670 containerd[1496]: time="2025-02-13T21:37:54.066615313Z" level=info msg="CreateContainer within sandbox \"853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 21:37:54.087421 containerd[1496]: time="2025-02-13T21:37:54.087357032Z" level=info msg="CreateContainer within sandbox \"853962a77546df7bd32d72b4b4307936ccad47b7164e1886c30ae3ddcb78b5dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5f62b4edac1d57c08549ffb4c1ce786898c44df399e41f6cb17f6e60a5e775ba\"" Feb 13 21:37:54.092189 containerd[1496]: time="2025-02-13T21:37:54.092146957Z" level=info msg="StartContainer for \"5f62b4edac1d57c08549ffb4c1ce786898c44df399e41f6cb17f6e60a5e775ba\"" Feb 13 21:37:54.252999 kubelet[1916]: E0213 21:37:54.252900 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:54.296423 systemd[1]: Started cri-containerd-5f62b4edac1d57c08549ffb4c1ce786898c44df399e41f6cb17f6e60a5e775ba.scope - libcontainer container 5f62b4edac1d57c08549ffb4c1ce786898c44df399e41f6cb17f6e60a5e775ba. Feb 13 21:37:54.344824 containerd[1496]: time="2025-02-13T21:37:54.344760591Z" level=info msg="StartContainer for \"5f62b4edac1d57c08549ffb4c1ce786898c44df399e41f6cb17f6e60a5e775ba\" returns successfully" Feb 13 21:37:54.455209 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
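
For scale: the PullImage entries just above report 142,742,010 bytes read for ghcr.io/flatcar/calico/node:v3.29.1 in 9.594676819 s, which works out to roughly 14.9 MB/s (about 14.2 MiB/s). A quick check of that arithmetic:

    // Sanity-check the throughput implied by the numbers quoted in the log above.
    package main

    import "fmt"

    func main() {
        const bytesRead = 142742010 // "active requests=0, bytes read=142742010"
        const seconds = 9.594676819 // "... in 9.594676819s"
        rate := float64(bytesRead) / seconds
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
    }
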
Feb 13 21:37:54.457226 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 21:37:54.472154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc-shm.mount: Deactivated successfully. Feb 13 21:37:54.472730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750589743.mount: Deactivated successfully. Feb 13 21:37:54.580990 kubelet[1916]: I0213 21:37:54.580776 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd" Feb 13 21:37:54.582671 containerd[1496]: time="2025-02-13T21:37:54.582382332Z" level=info msg="StopPodSandbox for \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\"" Feb 13 21:37:54.583739 containerd[1496]: time="2025-02-13T21:37:54.583706966Z" level=info msg="Ensure that sandbox 1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd in task-service has been cleanup successfully" Feb 13 21:37:54.585292 containerd[1496]: time="2025-02-13T21:37:54.584172281Z" level=info msg="TearDown network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\" successfully" Feb 13 21:37:54.585292 containerd[1496]: time="2025-02-13T21:37:54.584256336Z" level=info msg="StopPodSandbox for \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\" returns successfully" Feb 13 21:37:54.587604 systemd[1]: run-netns-cni\x2dc7d965b9\x2d5bd5\x2daccd\x2d3e9a\x2d822d10fd2bb0.mount: Deactivated successfully. Feb 13 21:37:54.588559 containerd[1496]: time="2025-02-13T21:37:54.587788273Z" level=info msg="StopPodSandbox for \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\"" Feb 13 21:37:54.588559 containerd[1496]: time="2025-02-13T21:37:54.587900181Z" level=info msg="TearDown network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" successfully" Feb 13 21:37:54.588559 containerd[1496]: time="2025-02-13T21:37:54.587920005Z" level=info msg="StopPodSandbox for \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" returns successfully" Feb 13 21:37:54.590768 containerd[1496]: time="2025-02-13T21:37:54.590313502Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\"" Feb 13 21:37:54.590768 containerd[1496]: time="2025-02-13T21:37:54.590479100Z" level=info msg="TearDown network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" successfully" Feb 13 21:37:54.590768 containerd[1496]: time="2025-02-13T21:37:54.590498440Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" returns successfully" Feb 13 21:37:54.591394 kubelet[1916]: I0213 21:37:54.590839 1916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc" Feb 13 21:37:54.591846 containerd[1496]: time="2025-02-13T21:37:54.591538020Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" Feb 13 21:37:54.591846 containerd[1496]: time="2025-02-13T21:37:54.591655556Z" level=info msg="TearDown network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" successfully" Feb 13 21:37:54.591846 containerd[1496]: time="2025-02-13T21:37:54.591686442Z" level=info msg="StopPodSandbox for 
\"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" returns successfully" Feb 13 21:37:54.592281 containerd[1496]: time="2025-02-13T21:37:54.592232441Z" level=info msg="StopPodSandbox for \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\"" Feb 13 21:37:54.592526 containerd[1496]: time="2025-02-13T21:37:54.592454396Z" level=info msg="Ensure that sandbox 4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc in task-service has been cleanup successfully" Feb 13 21:37:54.594231 containerd[1496]: time="2025-02-13T21:37:54.592789791Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:37:54.594231 containerd[1496]: time="2025-02-13T21:37:54.592906834Z" level=info msg="TearDown network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" successfully" Feb 13 21:37:54.594231 containerd[1496]: time="2025-02-13T21:37:54.592928020Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" returns successfully" Feb 13 21:37:54.594438 containerd[1496]: time="2025-02-13T21:37:54.594395762Z" level=info msg="TearDown network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\" successfully" Feb 13 21:37:54.594518 containerd[1496]: time="2025-02-13T21:37:54.594451482Z" level=info msg="StopPodSandbox for \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\" returns successfully" Feb 13 21:37:54.596971 containerd[1496]: time="2025-02-13T21:37:54.596427870Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:37:54.596971 containerd[1496]: time="2025-02-13T21:37:54.596539627Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:37:54.596971 containerd[1496]: time="2025-02-13T21:37:54.596567111Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:37:54.598509 containerd[1496]: time="2025-02-13T21:37:54.597742751Z" level=info msg="StopPodSandbox for \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\"" Feb 13 21:37:54.598509 containerd[1496]: time="2025-02-13T21:37:54.597828678Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:37:54.598509 containerd[1496]: time="2025-02-13T21:37:54.597917054Z" level=info msg="TearDown network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" successfully" Feb 13 21:37:54.598509 containerd[1496]: time="2025-02-13T21:37:54.597938628Z" level=info msg="StopPodSandbox for \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" returns successfully" Feb 13 21:37:54.598509 containerd[1496]: time="2025-02-13T21:37:54.597940481Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:37:54.598509 containerd[1496]: time="2025-02-13T21:37:54.598058570Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:37:54.598053 systemd[1]: run-netns-cni\x2d07719227\x2d25db\x2d45a6\x2dee1c\x2d70f82f43e50b.mount: Deactivated successfully. 
Feb 13 21:37:54.600747 containerd[1496]: time="2025-02-13T21:37:54.600066127Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\"" Feb 13 21:37:54.600747 containerd[1496]: time="2025-02-13T21:37:54.600186811Z" level=info msg="TearDown network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" successfully" Feb 13 21:37:54.600747 containerd[1496]: time="2025-02-13T21:37:54.600262826Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" returns successfully" Feb 13 21:37:54.600747 containerd[1496]: time="2025-02-13T21:37:54.600343903Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:37:54.600747 containerd[1496]: time="2025-02-13T21:37:54.600449354Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:37:54.600747 containerd[1496]: time="2025-02-13T21:37:54.600476565Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:37:54.601881 containerd[1496]: time="2025-02-13T21:37:54.601571715Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" Feb 13 21:37:54.601881 containerd[1496]: time="2025-02-13T21:37:54.601673011Z" level=info msg="TearDown network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" successfully" Feb 13 21:37:54.601881 containerd[1496]: time="2025-02-13T21:37:54.601705306Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" returns successfully" Feb 13 21:37:54.602783 containerd[1496]: time="2025-02-13T21:37:54.602240197Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:37:54.602783 containerd[1496]: time="2025-02-13T21:37:54.602409278Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:37:54.602783 containerd[1496]: time="2025-02-13T21:37:54.602562869Z" level=info msg="TearDown network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" successfully" Feb 13 21:37:54.602783 containerd[1496]: time="2025-02-13T21:37:54.602585420Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" returns successfully" Feb 13 21:37:54.603967 containerd[1496]: time="2025-02-13T21:37:54.603593998Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:37:54.603967 containerd[1496]: time="2025-02-13T21:37:54.603633377Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:37:54.603967 containerd[1496]: time="2025-02-13T21:37:54.603789132Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:37:54.603967 containerd[1496]: time="2025-02-13T21:37:54.603886246Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:37:54.603967 containerd[1496]: time="2025-02-13T21:37:54.603903593Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" 
returns successfully" Feb 13 21:37:54.606235 containerd[1496]: time="2025-02-13T21:37:54.605865158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:9,}" Feb 13 21:37:54.606512 containerd[1496]: time="2025-02-13T21:37:54.606466608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:6,}" Feb 13 21:37:54.937906 systemd-networkd[1411]: cali22a6acc10f4: Link UP Feb 13 21:37:54.938659 systemd-networkd[1411]: cali22a6acc10f4: Gained carrier Feb 13 21:37:54.959078 kubelet[1916]: I0213 21:37:54.958429 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f2wv6" podStartSLOduration=4.541664353 podStartE2EDuration="27.958386236s" podCreationTimestamp="2025-02-13 21:37:27 +0000 UTC" firstStartedPulling="2025-02-13 21:37:30.625714818 +0000 UTC m=+3.952901188" lastFinishedPulling="2025-02-13 21:37:54.04243669 +0000 UTC m=+27.369623071" observedRunningTime="2025-02-13 21:37:54.636143915 +0000 UTC m=+27.963330306" watchObservedRunningTime="2025-02-13 21:37:54.958386236 +0000 UTC m=+28.285572616" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.687 [INFO][3057] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.729 [INFO][3057] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.66-k8s-csi--node--driver--7sxsz-eth0 csi-node-driver- calico-system aa37c304-b0cb-4ab2-957c-24161a763ce8 1182 0 2025-02-13 21:37:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.24.66 csi-node-driver-7sxsz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22a6acc10f4 [] []}} ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.730 [INFO][3057] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.843 [INFO][3082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" HandleID="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Workload="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.875 [INFO][3082] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" HandleID="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Workload="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039ca90), 
Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.24.66", "pod":"csi-node-driver-7sxsz", "timestamp":"2025-02-13 21:37:54.843456048 +0000 UTC"}, Hostname:"10.230.24.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.876 [INFO][3082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.876 [INFO][3082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.877 [INFO][3082] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.66' Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.880 [INFO][3082] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.888 [INFO][3082] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.895 [INFO][3082] ipam/ipam.go 489: Trying affinity for 192.168.1.64/26 host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.897 [INFO][3082] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.900 [INFO][3082] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.901 [INFO][3082] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.64/26 handle="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.903 [INFO][3082] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.909 [INFO][3082] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.64/26 handle="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.919 [INFO][3082] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.65/26] block=192.168.1.64/26 handle="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.919 [INFO][3082] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.65/26] handle="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" host="10.230.24.66" Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.919 [INFO][3082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 21:37:54.961163 containerd[1496]: 2025-02-13 21:37:54.920 [INFO][3082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.65/26] IPv6=[] ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" HandleID="k8s-pod-network.2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Workload="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" Feb 13 21:37:54.962811 containerd[1496]: 2025-02-13 21:37:54.923 [INFO][3057] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-csi--node--driver--7sxsz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa37c304-b0cb-4ab2-957c-24161a763ce8", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"", Pod:"csi-node-driver-7sxsz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22a6acc10f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:37:54.962811 containerd[1496]: 2025-02-13 21:37:54.923 [INFO][3057] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.65/32] ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" Feb 13 21:37:54.962811 containerd[1496]: 2025-02-13 21:37:54.923 [INFO][3057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22a6acc10f4 ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" Feb 13 21:37:54.962811 containerd[1496]: 2025-02-13 21:37:54.938 [INFO][3057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" Feb 13 21:37:54.962811 containerd[1496]: 2025-02-13 21:37:54.939 [INFO][3057] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" 
WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-csi--node--driver--7sxsz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa37c304-b0cb-4ab2-957c-24161a763ce8", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d", Pod:"csi-node-driver-7sxsz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22a6acc10f4", MAC:"e6:51:f6:75:5e:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:37:54.962811 containerd[1496]: 2025-02-13 21:37:54.959 [INFO][3057] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d" Namespace="calico-system" Pod="csi-node-driver-7sxsz" WorkloadEndpoint="10.230.24.66-k8s-csi--node--driver--7sxsz-eth0" Feb 13 21:37:54.996106 containerd[1496]: time="2025-02-13T21:37:54.995498570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:37:54.996106 containerd[1496]: time="2025-02-13T21:37:54.996018000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:37:54.997444 containerd[1496]: time="2025-02-13T21:37:54.997073602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:54.997444 containerd[1496]: time="2025-02-13T21:37:54.997301437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:55.025428 systemd[1]: Started cri-containerd-2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d.scope - libcontainer container 2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d. 
Feb 13 21:37:55.035555 systemd-networkd[1411]: cali961614d32f0: Link UP Feb 13 21:37:55.036922 systemd-networkd[1411]: cali961614d32f0: Gained carrier Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.690 [INFO][3067] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.730 [INFO][3067] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0 nginx-deployment-8587fbcb89- default d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb 1303 0 2025-02-13 21:37:46 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.24.66 nginx-deployment-8587fbcb89-xq22c eth0 default [] [] [kns.default ksa.default.default] cali961614d32f0 [] []}} ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.730 [INFO][3067] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.844 [INFO][3083] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" HandleID="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Workload="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.876 [INFO][3083] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" HandleID="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Workload="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc0b0), Attrs:map[string]string{"namespace":"default", "node":"10.230.24.66", "pod":"nginx-deployment-8587fbcb89-xq22c", "timestamp":"2025-02-13 21:37:54.844618811 +0000 UTC"}, Hostname:"10.230.24.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.876 [INFO][3083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.919 [INFO][3083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.919 [INFO][3083] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.66' Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.982 [INFO][3083] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.990 [INFO][3083] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:54.997 [INFO][3083] ipam/ipam.go 489: Trying affinity for 192.168.1.64/26 host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.000 [INFO][3083] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.004 [INFO][3083] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.004 [INFO][3083] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.64/26 handle="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.009 [INFO][3083] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.017 [INFO][3083] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.64/26 handle="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.026 [INFO][3083] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.66/26] block=192.168.1.64/26 handle="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.026 [INFO][3083] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.66/26] handle="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" host="10.230.24.66" Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.026 [INFO][3083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 21:37:55.058095 containerd[1496]: 2025-02-13 21:37:55.026 [INFO][3083] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.66/26] IPv6=[] ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" HandleID="k8s-pod-network.236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Workload="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" Feb 13 21:37:55.059258 containerd[1496]: 2025-02-13 21:37:55.030 [INFO][3067] cni-plugin/k8s.go 386: Populated endpoint ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb", ResourceVersion:"1303", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 37, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-xq22c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.1.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali961614d32f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:37:55.059258 containerd[1496]: 2025-02-13 21:37:55.030 [INFO][3067] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.66/32] ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" Feb 13 21:37:55.059258 containerd[1496]: 2025-02-13 21:37:55.030 [INFO][3067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali961614d32f0 ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" Feb 13 21:37:55.059258 containerd[1496]: 2025-02-13 21:37:55.037 [INFO][3067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" Feb 13 21:37:55.059258 containerd[1496]: 2025-02-13 21:37:55.039 [INFO][3067] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb", ResourceVersion:"1303", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 37, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef", Pod:"nginx-deployment-8587fbcb89-xq22c", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.1.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali961614d32f0", MAC:"2a:06:27:50:1e:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:37:55.059258 containerd[1496]: 2025-02-13 21:37:55.050 [INFO][3067] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef" Namespace="default" Pod="nginx-deployment-8587fbcb89-xq22c" WorkloadEndpoint="10.230.24.66-k8s-nginx--deployment--8587fbcb89--xq22c-eth0" Feb 13 21:37:55.079666 containerd[1496]: time="2025-02-13T21:37:55.079434656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7sxsz,Uid:aa37c304-b0cb-4ab2-957c-24161a763ce8,Namespace:calico-system,Attempt:9,} returns sandbox id \"2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d\"" Feb 13 21:37:55.084208 containerd[1496]: time="2025-02-13T21:37:55.083906899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 21:37:55.098474 containerd[1496]: time="2025-02-13T21:37:55.098265495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:37:55.098793 containerd[1496]: time="2025-02-13T21:37:55.098550078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:37:55.098793 containerd[1496]: time="2025-02-13T21:37:55.098584829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:55.098793 containerd[1496]: time="2025-02-13T21:37:55.098710578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:37:55.124419 systemd[1]: Started cri-containerd-236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef.scope - libcontainer container 236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef. 
Feb 13 21:37:55.180285 containerd[1496]: time="2025-02-13T21:37:55.180230500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-xq22c,Uid:d307c13f-d9b3-46b9-8352-ab6bcd9eb0cb,Namespace:default,Attempt:6,} returns sandbox id \"236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef\"" Feb 13 21:37:55.254072 kubelet[1916]: E0213 21:37:55.253921 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:55.618864 kubelet[1916]: I0213 21:37:55.618699 1916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:37:56.254988 kubelet[1916]: E0213 21:37:56.254865 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:56.282245 kernel: bpftool[3324]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 21:37:56.392518 systemd-networkd[1411]: cali22a6acc10f4: Gained IPv6LL Feb 13 21:37:56.639592 systemd-networkd[1411]: vxlan.calico: Link UP Feb 13 21:37:56.639606 systemd-networkd[1411]: vxlan.calico: Gained carrier Feb 13 21:37:56.961876 containerd[1496]: time="2025-02-13T21:37:56.961817947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:56.963691 containerd[1496]: time="2025-02-13T21:37:56.963631029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 21:37:56.966204 containerd[1496]: time="2025-02-13T21:37:56.964864548Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:56.968286 containerd[1496]: time="2025-02-13T21:37:56.968248642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:37:56.968893 systemd-networkd[1411]: cali961614d32f0: Gained IPv6LL Feb 13 21:37:56.971800 containerd[1496]: time="2025-02-13T21:37:56.971766394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.887815186s" Feb 13 21:37:56.971945 containerd[1496]: time="2025-02-13T21:37:56.971917029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 21:37:56.974589 containerd[1496]: time="2025-02-13T21:37:56.974558150Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 21:37:56.976357 containerd[1496]: time="2025-02-13T21:37:56.976325612Z" level=info msg="CreateContainer within sandbox \"2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 21:37:57.004528 containerd[1496]: time="2025-02-13T21:37:57.004468398Z" level=info msg="CreateContainer within sandbox \"2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"ec32a6411ba35e41b813201092a0b444c1017de0afc2e20f0a86e44f563440a7\"" Feb 13 21:37:57.005548 containerd[1496]: time="2025-02-13T21:37:57.005518515Z" level=info msg="StartContainer for \"ec32a6411ba35e41b813201092a0b444c1017de0afc2e20f0a86e44f563440a7\"" Feb 13 21:37:57.062692 systemd[1]: Started cri-containerd-ec32a6411ba35e41b813201092a0b444c1017de0afc2e20f0a86e44f563440a7.scope - libcontainer container ec32a6411ba35e41b813201092a0b444c1017de0afc2e20f0a86e44f563440a7. Feb 13 21:37:57.147727 containerd[1496]: time="2025-02-13T21:37:57.147653086Z" level=info msg="StartContainer for \"ec32a6411ba35e41b813201092a0b444c1017de0afc2e20f0a86e44f563440a7\" returns successfully" Feb 13 21:37:57.255694 kubelet[1916]: E0213 21:37:57.255451 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:57.736355 systemd-networkd[1411]: vxlan.calico: Gained IPv6LL Feb 13 21:37:58.256003 kubelet[1916]: E0213 21:37:58.255949 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:37:58.301201 kubelet[1916]: I0213 21:37:58.301076 1916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:37:58.330590 systemd[1]: run-containerd-runc-k8s.io-5f62b4edac1d57c08549ffb4c1ce786898c44df399e41f6cb17f6e60a5e775ba-runc.3edxVv.mount: Deactivated successfully. Feb 13 21:37:59.257298 kubelet[1916]: E0213 21:37:59.257196 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:00.258003 kubelet[1916]: E0213 21:38:00.257931 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:01.259833 kubelet[1916]: E0213 21:38:01.258957 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:01.301605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562753664.mount: Deactivated successfully. 
Feb 13 21:38:02.259971 kubelet[1916]: E0213 21:38:02.259912 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:03.096813 containerd[1496]: time="2025-02-13T21:38:03.096732815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:03.098381 containerd[1496]: time="2025-02-13T21:38:03.098320836Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 21:38:03.099502 containerd[1496]: time="2025-02-13T21:38:03.099441239Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:03.103108 containerd[1496]: time="2025-02-13T21:38:03.103029464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:03.104835 containerd[1496]: time="2025-02-13T21:38:03.104611885Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.129729361s" Feb 13 21:38:03.104835 containerd[1496]: time="2025-02-13T21:38:03.104681303Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 21:38:03.107361 containerd[1496]: time="2025-02-13T21:38:03.107258365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 21:38:03.116711 containerd[1496]: time="2025-02-13T21:38:03.116422588Z" level=info msg="CreateContainer within sandbox \"236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 21:38:03.154935 containerd[1496]: time="2025-02-13T21:38:03.154879569Z" level=info msg="CreateContainer within sandbox \"236e61d5eaa15df6b93990b2c679fc7cd73197beb3e440b1a3db5a567c76b6ef\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"29ac0fc34246d6d45a983116a7c8c8e6701dd0efb83faab7d0274d989cb6030b\"" Feb 13 21:38:03.155904 containerd[1496]: time="2025-02-13T21:38:03.155612135Z" level=info msg="StartContainer for \"29ac0fc34246d6d45a983116a7c8c8e6701dd0efb83faab7d0274d989cb6030b\"" Feb 13 21:38:03.232391 systemd[1]: Started cri-containerd-29ac0fc34246d6d45a983116a7c8c8e6701dd0efb83faab7d0274d989cb6030b.scope - libcontainer container 29ac0fc34246d6d45a983116a7c8c8e6701dd0efb83faab7d0274d989cb6030b. 
Feb 13 21:38:03.261004 kubelet[1916]: E0213 21:38:03.260935 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:03.285053 containerd[1496]: time="2025-02-13T21:38:03.284993010Z" level=info msg="StartContainer for \"29ac0fc34246d6d45a983116a7c8c8e6701dd0efb83faab7d0274d989cb6030b\" returns successfully" Feb 13 21:38:04.261804 kubelet[1916]: E0213 21:38:04.261704 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:04.803158 containerd[1496]: time="2025-02-13T21:38:04.803086903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:04.804472 containerd[1496]: time="2025-02-13T21:38:04.804216293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 21:38:04.805360 containerd[1496]: time="2025-02-13T21:38:04.805240794Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:04.808117 containerd[1496]: time="2025-02-13T21:38:04.808042593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:04.809763 containerd[1496]: time="2025-02-13T21:38:04.809225223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.701922359s" Feb 13 21:38:04.809763 containerd[1496]: time="2025-02-13T21:38:04.809271232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 21:38:04.812051 containerd[1496]: time="2025-02-13T21:38:04.812008768Z" level=info msg="CreateContainer within sandbox \"2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 21:38:04.833160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174089244.mount: Deactivated successfully. Feb 13 21:38:04.834102 containerd[1496]: time="2025-02-13T21:38:04.833501360Z" level=info msg="CreateContainer within sandbox \"2e17744b248029f3d5cc2ef25e0c4473e940490d634ab5a30532a3785f27ce6d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c99bb7bd910dbf5153cbe233e835de34604de134bc80fb206635d70c9835f303\"" Feb 13 21:38:04.836108 containerd[1496]: time="2025-02-13T21:38:04.836032549Z" level=info msg="StartContainer for \"c99bb7bd910dbf5153cbe233e835de34604de134bc80fb206635d70c9835f303\"" Feb 13 21:38:04.879383 systemd[1]: Started cri-containerd-c99bb7bd910dbf5153cbe233e835de34604de134bc80fb206635d70c9835f303.scope - libcontainer container c99bb7bd910dbf5153cbe233e835de34604de134bc80fb206635d70c9835f303. 
Feb 13 21:38:04.928896 containerd[1496]: time="2025-02-13T21:38:04.928827642Z" level=info msg="StartContainer for \"c99bb7bd910dbf5153cbe233e835de34604de134bc80fb206635d70c9835f303\" returns successfully" Feb 13 21:38:05.262690 kubelet[1916]: E0213 21:38:05.262619 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:05.372467 kubelet[1916]: I0213 21:38:05.372376 1916 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 21:38:05.372467 kubelet[1916]: I0213 21:38:05.372437 1916 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 21:38:05.749721 kubelet[1916]: I0213 21:38:05.749648 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7sxsz" podStartSLOduration=29.022082614 podStartE2EDuration="38.749597943s" podCreationTimestamp="2025-02-13 21:37:27 +0000 UTC" firstStartedPulling="2025-02-13 21:37:55.082822433 +0000 UTC m=+28.410008807" lastFinishedPulling="2025-02-13 21:38:04.810337766 +0000 UTC m=+38.137524136" observedRunningTime="2025-02-13 21:38:05.747235105 +0000 UTC m=+39.074421509" watchObservedRunningTime="2025-02-13 21:38:05.749597943 +0000 UTC m=+39.076784326" Feb 13 21:38:05.750359 kubelet[1916]: I0213 21:38:05.750137 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-xq22c" podStartSLOduration=11.826164446 podStartE2EDuration="19.75012782s" podCreationTimestamp="2025-02-13 21:37:46 +0000 UTC" firstStartedPulling="2025-02-13 21:37:55.182450304 +0000 UTC m=+28.509636678" lastFinishedPulling="2025-02-13 21:38:03.106413656 +0000 UTC m=+36.433600052" observedRunningTime="2025-02-13 21:38:03.726583349 +0000 UTC m=+37.053769745" watchObservedRunningTime="2025-02-13 21:38:05.75012782 +0000 UTC m=+39.077314203" Feb 13 21:38:06.263095 kubelet[1916]: E0213 21:38:06.263014 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:07.225596 kubelet[1916]: E0213 21:38:07.225522 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:07.263653 kubelet[1916]: E0213 21:38:07.263562 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:08.264599 kubelet[1916]: E0213 21:38:08.264543 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:09.265448 kubelet[1916]: E0213 21:38:09.265375 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:10.266078 kubelet[1916]: E0213 21:38:10.265980 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:11.266724 kubelet[1916]: E0213 21:38:11.266615 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:12.267531 kubelet[1916]: E0213 21:38:12.267457 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:13.267728 kubelet[1916]: E0213 
21:38:13.267630 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:14.268890 kubelet[1916]: E0213 21:38:14.268787 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:15.269171 kubelet[1916]: E0213 21:38:15.269090 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:16.156737 systemd[1]: Created slice kubepods-besteffort-pod8b77855b_0859_4b58_9f83_2113a49736d8.slice - libcontainer container kubepods-besteffort-pod8b77855b_0859_4b58_9f83_2113a49736d8.slice. Feb 13 21:38:16.255057 kubelet[1916]: I0213 21:38:16.254968 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8b77855b-0859-4b58-9f83-2113a49736d8-data\") pod \"nfs-server-provisioner-0\" (UID: \"8b77855b-0859-4b58-9f83-2113a49736d8\") " pod="default/nfs-server-provisioner-0" Feb 13 21:38:16.255057 kubelet[1916]: I0213 21:38:16.255041 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrp59\" (UniqueName: \"kubernetes.io/projected/8b77855b-0859-4b58-9f83-2113a49736d8-kube-api-access-lrp59\") pod \"nfs-server-provisioner-0\" (UID: \"8b77855b-0859-4b58-9f83-2113a49736d8\") " pod="default/nfs-server-provisioner-0" Feb 13 21:38:16.269414 kubelet[1916]: E0213 21:38:16.269349 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:16.462634 containerd[1496]: time="2025-02-13T21:38:16.462398280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8b77855b-0859-4b58-9f83-2113a49736d8,Namespace:default,Attempt:0,}" Feb 13 21:38:16.667178 systemd-networkd[1411]: cali60e51b789ff: Link UP Feb 13 21:38:16.669060 systemd-networkd[1411]: cali60e51b789ff: Gained carrier Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.553 [INFO][3639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.66-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 8b77855b-0859-4b58-9f83-2113a49736d8 1447 0 2025-02-13 21:38:16 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.24.66 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.554 [INFO][3639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.598 [INFO][3650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" HandleID="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Workload="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.620 [INFO][3650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" HandleID="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Workload="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332c80), Attrs:map[string]string{"namespace":"default", "node":"10.230.24.66", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 21:38:16.598168598 +0000 UTC"}, Hostname:"10.230.24.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.620 [INFO][3650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.620 [INFO][3650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.620 [INFO][3650] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.66' Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.625 [INFO][3650] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.631 [INFO][3650] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.637 [INFO][3650] ipam/ipam.go 489: Trying affinity for 192.168.1.64/26 host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.640 [INFO][3650] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.643 [INFO][3650] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.643 [INFO][3650] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.64/26 handle="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.646 [INFO][3650] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.652 [INFO][3650] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.64/26 handle="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.659 [INFO][3650] ipam/ipam.go 
1216: Successfully claimed IPs: [192.168.1.67/26] block=192.168.1.64/26 handle="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.659 [INFO][3650] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.67/26] handle="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" host="10.230.24.66" Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.659 [INFO][3650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:38:16.686593 containerd[1496]: 2025-02-13 21:38:16.659 [INFO][3650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.67/26] IPv6=[] ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" HandleID="k8s-pod-network.d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Workload="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" Feb 13 21:38:16.689656 containerd[1496]: 2025-02-13 21:38:16.662 [INFO][3639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8b77855b-0859-4b58-9f83-2113a49736d8", ResourceVersion:"1447", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.1.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:38:16.689656 containerd[1496]: 2025-02-13 21:38:16.662 [INFO][3639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.67/32] ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" Feb 13 21:38:16.689656 containerd[1496]: 2025-02-13 21:38:16.662 [INFO][3639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" Feb 13 21:38:16.689656 containerd[1496]: 2025-02-13 21:38:16.668 [INFO][3639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" Feb 13 21:38:16.689946 containerd[1496]: 2025-02-13 21:38:16.669 [INFO][3639] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"8b77855b-0859-4b58-9f83-2113a49736d8", ResourceVersion:"1447", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.1.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"52:c5:15:9b:c4:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:38:16.689946 containerd[1496]: 2025-02-13 21:38:16.679 [INFO][3639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.24.66-k8s-nfs--server--provisioner--0-eth0" Feb 13 21:38:16.721461 containerd[1496]: time="2025-02-13T21:38:16.720485995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:38:16.721461 containerd[1496]: time="2025-02-13T21:38:16.720568851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:38:16.721461 containerd[1496]: time="2025-02-13T21:38:16.720593533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:38:16.721461 containerd[1496]: time="2025-02-13T21:38:16.720726164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:38:16.760448 systemd[1]: Started cri-containerd-d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e.scope - libcontainer container d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e. Feb 13 21:38:16.822703 containerd[1496]: time="2025-02-13T21:38:16.822628390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8b77855b-0859-4b58-9f83-2113a49736d8,Namespace:default,Attempt:0,} returns sandbox id \"d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e\"" Feb 13 21:38:16.824946 containerd[1496]: time="2025-02-13T21:38:16.824543661Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 21:38:17.270637 kubelet[1916]: E0213 21:38:17.270492 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:18.271348 kubelet[1916]: E0213 21:38:18.271268 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:18.600514 systemd-networkd[1411]: cali60e51b789ff: Gained IPv6LL Feb 13 21:38:19.273207 kubelet[1916]: E0213 21:38:19.272541 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:20.274346 kubelet[1916]: E0213 21:38:20.274289 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:20.649933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891331849.mount: Deactivated successfully. Feb 13 21:38:21.275572 kubelet[1916]: E0213 21:38:21.275497 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:22.276466 kubelet[1916]: E0213 21:38:22.276258 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:23.277465 kubelet[1916]: E0213 21:38:23.277378 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:23.607343 containerd[1496]: time="2025-02-13T21:38:23.607066471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:23.609103 containerd[1496]: time="2025-02-13T21:38:23.609036159Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 21:38:23.610513 containerd[1496]: time="2025-02-13T21:38:23.610449094Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:23.614461 containerd[1496]: time="2025-02-13T21:38:23.614390563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:23.617074 containerd[1496]: time="2025-02-13T21:38:23.616036388Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo 
digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.791424834s" Feb 13 21:38:23.617074 containerd[1496]: time="2025-02-13T21:38:23.616091024Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 21:38:23.620595 containerd[1496]: time="2025-02-13T21:38:23.620363343Z" level=info msg="CreateContainer within sandbox \"d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 21:38:23.655421 containerd[1496]: time="2025-02-13T21:38:23.654248396Z" level=info msg="CreateContainer within sandbox \"d06250cdf3c1450cd155163bd5324d92f33c3c8d5de03373aad7ba2990ad5e8e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ad2249adb145124c8aa55510ecba855260d8109001757e3137dbe3647ee469ba\"" Feb 13 21:38:23.656105 containerd[1496]: time="2025-02-13T21:38:23.656063355Z" level=info msg="StartContainer for \"ad2249adb145124c8aa55510ecba855260d8109001757e3137dbe3647ee469ba\"" Feb 13 21:38:23.704420 systemd[1]: Started cri-containerd-ad2249adb145124c8aa55510ecba855260d8109001757e3137dbe3647ee469ba.scope - libcontainer container ad2249adb145124c8aa55510ecba855260d8109001757e3137dbe3647ee469ba. Feb 13 21:38:23.749728 containerd[1496]: time="2025-02-13T21:38:23.748408720Z" level=info msg="StartContainer for \"ad2249adb145124c8aa55510ecba855260d8109001757e3137dbe3647ee469ba\" returns successfully" Feb 13 21:38:23.808331 kubelet[1916]: I0213 21:38:23.808239 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.013894392 podStartE2EDuration="7.808205558s" podCreationTimestamp="2025-02-13 21:38:16 +0000 UTC" firstStartedPulling="2025-02-13 21:38:16.824081989 +0000 UTC m=+50.151268369" lastFinishedPulling="2025-02-13 21:38:23.618393159 +0000 UTC m=+56.945579535" observedRunningTime="2025-02-13 21:38:23.807876431 +0000 UTC m=+57.135062845" watchObservedRunningTime="2025-02-13 21:38:23.808205558 +0000 UTC m=+57.135391938" Feb 13 21:38:24.278634 kubelet[1916]: E0213 21:38:24.278540 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:25.279060 kubelet[1916]: E0213 21:38:25.278887 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:26.279551 kubelet[1916]: E0213 21:38:26.279458 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:27.224782 kubelet[1916]: E0213 21:38:27.224709 1916 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:27.266427 containerd[1496]: time="2025-02-13T21:38:27.266372061Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:38:27.267397 containerd[1496]: time="2025-02-13T21:38:27.266553175Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:38:27.267397 containerd[1496]: time="2025-02-13T21:38:27.266588826Z" level=info msg="StopPodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" 
returns successfully" Feb 13 21:38:27.272517 containerd[1496]: time="2025-02-13T21:38:27.272470505Z" level=info msg="RemovePodSandbox for \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:38:27.281200 containerd[1496]: time="2025-02-13T21:38:27.280415402Z" level=info msg="Forcibly stopping sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\"" Feb 13 21:38:27.281323 kubelet[1916]: E0213 21:38:27.280581 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:27.290308 containerd[1496]: time="2025-02-13T21:38:27.280697088Z" level=info msg="TearDown network for sandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" successfully" Feb 13 21:38:27.306415 containerd[1496]: time="2025-02-13T21:38:27.306298186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.306564 containerd[1496]: time="2025-02-13T21:38:27.306443204Z" level=info msg="RemovePodSandbox \"ac13fcd7c94914dfd91c2177630fe030141c651074f1cc1f84a8e5fca18cea58\" returns successfully" Feb 13 21:38:27.307225 containerd[1496]: time="2025-02-13T21:38:27.307171332Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:38:27.307354 containerd[1496]: time="2025-02-13T21:38:27.307329765Z" level=info msg="TearDown network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" successfully" Feb 13 21:38:27.307592 containerd[1496]: time="2025-02-13T21:38:27.307357344Z" level=info msg="StopPodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" returns successfully" Feb 13 21:38:27.307778 containerd[1496]: time="2025-02-13T21:38:27.307702457Z" level=info msg="RemovePodSandbox for \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:38:27.307778 containerd[1496]: time="2025-02-13T21:38:27.307740143Z" level=info msg="Forcibly stopping sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\"" Feb 13 21:38:27.308356 containerd[1496]: time="2025-02-13T21:38:27.307838847Z" level=info msg="TearDown network for sandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" successfully" Feb 13 21:38:27.310679 containerd[1496]: time="2025-02-13T21:38:27.310594250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:38:27.310679 containerd[1496]: time="2025-02-13T21:38:27.310659085Z" level=info msg="RemovePodSandbox \"c760174036068c93ebc4bd9f0d009f703c87525199f56fdb7a878994295687e5\" returns successfully" Feb 13 21:38:27.311553 containerd[1496]: time="2025-02-13T21:38:27.311299951Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" Feb 13 21:38:27.311553 containerd[1496]: time="2025-02-13T21:38:27.311434706Z" level=info msg="TearDown network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" successfully" Feb 13 21:38:27.311553 containerd[1496]: time="2025-02-13T21:38:27.311464887Z" level=info msg="StopPodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" returns successfully" Feb 13 21:38:27.312169 containerd[1496]: time="2025-02-13T21:38:27.312126093Z" level=info msg="RemovePodSandbox for \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" Feb 13 21:38:27.312257 containerd[1496]: time="2025-02-13T21:38:27.312208938Z" level=info msg="Forcibly stopping sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\"" Feb 13 21:38:27.312397 containerd[1496]: time="2025-02-13T21:38:27.312299607Z" level=info msg="TearDown network for sandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" successfully" Feb 13 21:38:27.319934 containerd[1496]: time="2025-02-13T21:38:27.319880711Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.320075 containerd[1496]: time="2025-02-13T21:38:27.319940712Z" level=info msg="RemovePodSandbox \"b7f5f3d94e14266275693476d59c86a6d1f3bfd2761fabadd2f559bb9b1083f3\" returns successfully" Feb 13 21:38:27.320318 containerd[1496]: time="2025-02-13T21:38:27.320274823Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\"" Feb 13 21:38:27.320418 containerd[1496]: time="2025-02-13T21:38:27.320389946Z" level=info msg="TearDown network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" successfully" Feb 13 21:38:27.320487 containerd[1496]: time="2025-02-13T21:38:27.320416933Z" level=info msg="StopPodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" returns successfully" Feb 13 21:38:27.321002 containerd[1496]: time="2025-02-13T21:38:27.320960854Z" level=info msg="RemovePodSandbox for \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\"" Feb 13 21:38:27.321093 containerd[1496]: time="2025-02-13T21:38:27.320995113Z" level=info msg="Forcibly stopping sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\"" Feb 13 21:38:27.321507 containerd[1496]: time="2025-02-13T21:38:27.321108013Z" level=info msg="TearDown network for sandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" successfully" Feb 13 21:38:27.323551 containerd[1496]: time="2025-02-13T21:38:27.323516634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:38:27.323636 containerd[1496]: time="2025-02-13T21:38:27.323565936Z" level=info msg="RemovePodSandbox \"c1f332bf533290b20fc56508e01af73c0dede0767b0a7372f4cf62e558328fa2\" returns successfully" Feb 13 21:38:27.323946 containerd[1496]: time="2025-02-13T21:38:27.323912622Z" level=info msg="StopPodSandbox for \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\"" Feb 13 21:38:27.324139 containerd[1496]: time="2025-02-13T21:38:27.324029195Z" level=info msg="TearDown network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" successfully" Feb 13 21:38:27.324139 containerd[1496]: time="2025-02-13T21:38:27.324068895Z" level=info msg="StopPodSandbox for \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" returns successfully" Feb 13 21:38:27.324711 containerd[1496]: time="2025-02-13T21:38:27.324680197Z" level=info msg="RemovePodSandbox for \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\"" Feb 13 21:38:27.324817 containerd[1496]: time="2025-02-13T21:38:27.324714481Z" level=info msg="Forcibly stopping sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\"" Feb 13 21:38:27.324868 containerd[1496]: time="2025-02-13T21:38:27.324829315Z" level=info msg="TearDown network for sandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" successfully" Feb 13 21:38:27.330198 containerd[1496]: time="2025-02-13T21:38:27.329343917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.330198 containerd[1496]: time="2025-02-13T21:38:27.329415378Z" level=info msg="RemovePodSandbox \"27ce7cc21262ff1393b748fc4c26653307c4237c87819ec207a0b5374440ad43\" returns successfully" Feb 13 21:38:27.332029 containerd[1496]: time="2025-02-13T21:38:27.331983022Z" level=info msg="StopPodSandbox for \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\"" Feb 13 21:38:27.332197 containerd[1496]: time="2025-02-13T21:38:27.332130027Z" level=info msg="TearDown network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\" successfully" Feb 13 21:38:27.332268 containerd[1496]: time="2025-02-13T21:38:27.332202126Z" level=info msg="StopPodSandbox for \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\" returns successfully" Feb 13 21:38:27.336506 containerd[1496]: time="2025-02-13T21:38:27.336474405Z" level=info msg="RemovePodSandbox for \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\"" Feb 13 21:38:27.336594 containerd[1496]: time="2025-02-13T21:38:27.336510834Z" level=info msg="Forcibly stopping sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\"" Feb 13 21:38:27.336995 containerd[1496]: time="2025-02-13T21:38:27.336595841Z" level=info msg="TearDown network for sandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\" successfully" Feb 13 21:38:27.339875 containerd[1496]: time="2025-02-13T21:38:27.339823782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:38:27.339967 containerd[1496]: time="2025-02-13T21:38:27.339885398Z" level=info msg="RemovePodSandbox \"4f57ee5471815e920c1989eb31bea70bc0eda02066918441a47f4ae560a7b4cc\" returns successfully" Feb 13 21:38:27.340630 containerd[1496]: time="2025-02-13T21:38:27.340289301Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:38:27.340630 containerd[1496]: time="2025-02-13T21:38:27.340410204Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:38:27.340630 containerd[1496]: time="2025-02-13T21:38:27.340457320Z" level=info msg="StopPodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:38:27.340940 containerd[1496]: time="2025-02-13T21:38:27.340910233Z" level=info msg="RemovePodSandbox for \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:38:27.341207 containerd[1496]: time="2025-02-13T21:38:27.341055840Z" level=info msg="Forcibly stopping sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\"" Feb 13 21:38:27.341297 containerd[1496]: time="2025-02-13T21:38:27.341156503Z" level=info msg="TearDown network for sandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" successfully" Feb 13 21:38:27.344318 containerd[1496]: time="2025-02-13T21:38:27.344283174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.344561 containerd[1496]: time="2025-02-13T21:38:27.344453386Z" level=info msg="RemovePodSandbox \"6ed18d272764a6db55922c8996e29073a6d15d4dcdaf7fed6eddef7d95956e16\" returns successfully" Feb 13 21:38:27.344937 containerd[1496]: time="2025-02-13T21:38:27.344907299Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:38:27.345309 containerd[1496]: time="2025-02-13T21:38:27.345163504Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:38:27.345309 containerd[1496]: time="2025-02-13T21:38:27.345216603Z" level=info msg="StopPodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:38:27.346113 containerd[1496]: time="2025-02-13T21:38:27.345854538Z" level=info msg="RemovePodSandbox for \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:38:27.346113 containerd[1496]: time="2025-02-13T21:38:27.345888766Z" level=info msg="Forcibly stopping sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\"" Feb 13 21:38:27.346113 containerd[1496]: time="2025-02-13T21:38:27.345981787Z" level=info msg="TearDown network for sandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" successfully" Feb 13 21:38:27.348408 containerd[1496]: time="2025-02-13T21:38:27.348358430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:38:27.348572 containerd[1496]: time="2025-02-13T21:38:27.348413901Z" level=info msg="RemovePodSandbox \"376097a19c8eddc707442459a2e62df4e872375256c262cf82ec4e0abe0660b7\" returns successfully" Feb 13 21:38:27.349015 containerd[1496]: time="2025-02-13T21:38:27.348768600Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:38:27.349015 containerd[1496]: time="2025-02-13T21:38:27.348883007Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:38:27.349015 containerd[1496]: time="2025-02-13T21:38:27.348902555Z" level=info msg="StopPodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:38:27.349263 containerd[1496]: time="2025-02-13T21:38:27.349223168Z" level=info msg="RemovePodSandbox for \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:38:27.349263 containerd[1496]: time="2025-02-13T21:38:27.349266783Z" level=info msg="Forcibly stopping sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\"" Feb 13 21:38:27.349409 containerd[1496]: time="2025-02-13T21:38:27.349366091Z" level=info msg="TearDown network for sandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" successfully" Feb 13 21:38:27.351684 containerd[1496]: time="2025-02-13T21:38:27.351626548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.351830 containerd[1496]: time="2025-02-13T21:38:27.351684516Z" level=info msg="RemovePodSandbox \"8e47826eef9627f2cc33dbff6c93e69f1d8502158e49088e6e0000555d6a115a\" returns successfully" Feb 13 21:38:27.352714 containerd[1496]: time="2025-02-13T21:38:27.352359453Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:38:27.352714 containerd[1496]: time="2025-02-13T21:38:27.352476599Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:38:27.352714 containerd[1496]: time="2025-02-13T21:38:27.352496045Z" level=info msg="StopPodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:38:27.353147 containerd[1496]: time="2025-02-13T21:38:27.353005911Z" level=info msg="RemovePodSandbox for \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:38:27.353147 containerd[1496]: time="2025-02-13T21:38:27.353035102Z" level=info msg="Forcibly stopping sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\"" Feb 13 21:38:27.354736 containerd[1496]: time="2025-02-13T21:38:27.354598960Z" level=info msg="TearDown network for sandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" successfully" Feb 13 21:38:27.357221 containerd[1496]: time="2025-02-13T21:38:27.357131829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:38:27.357221 containerd[1496]: time="2025-02-13T21:38:27.357204868Z" level=info msg="RemovePodSandbox \"a52176272c287e377890d86b4b5810b7779a5c2904edd50d6358519b80e04d60\" returns successfully" Feb 13 21:38:27.358123 containerd[1496]: time="2025-02-13T21:38:27.357762461Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:38:27.358123 containerd[1496]: time="2025-02-13T21:38:27.357870256Z" level=info msg="TearDown network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" successfully" Feb 13 21:38:27.358123 containerd[1496]: time="2025-02-13T21:38:27.357901446Z" level=info msg="StopPodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" returns successfully" Feb 13 21:38:27.358381 containerd[1496]: time="2025-02-13T21:38:27.358304761Z" level=info msg="RemovePodSandbox for \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:38:27.358381 containerd[1496]: time="2025-02-13T21:38:27.358343789Z" level=info msg="Forcibly stopping sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\"" Feb 13 21:38:27.358512 containerd[1496]: time="2025-02-13T21:38:27.358431834Z" level=info msg="TearDown network for sandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" successfully" Feb 13 21:38:27.361015 containerd[1496]: time="2025-02-13T21:38:27.360942458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.361015 containerd[1496]: time="2025-02-13T21:38:27.360992265Z" level=info msg="RemovePodSandbox \"55a923bf6080db575108d89db81d409867b145fe71ac329a47b398c2742fc7c6\" returns successfully" Feb 13 21:38:27.361445 containerd[1496]: time="2025-02-13T21:38:27.361401440Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" Feb 13 21:38:27.361558 containerd[1496]: time="2025-02-13T21:38:27.361532239Z" level=info msg="TearDown network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" successfully" Feb 13 21:38:27.361558 containerd[1496]: time="2025-02-13T21:38:27.361551806Z" level=info msg="StopPodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" returns successfully" Feb 13 21:38:27.362239 containerd[1496]: time="2025-02-13T21:38:27.362168979Z" level=info msg="RemovePodSandbox for \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" Feb 13 21:38:27.362239 containerd[1496]: time="2025-02-13T21:38:27.362223060Z" level=info msg="Forcibly stopping sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\"" Feb 13 21:38:27.362354 containerd[1496]: time="2025-02-13T21:38:27.362310161Z" level=info msg="TearDown network for sandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" successfully" Feb 13 21:38:27.368660 containerd[1496]: time="2025-02-13T21:38:27.368303695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:38:27.368660 containerd[1496]: time="2025-02-13T21:38:27.368355770Z" level=info msg="RemovePodSandbox \"4e253a7215270693d28ac7025c6ae5f26514109efc8ac532f29b3f11640d2eb0\" returns successfully" Feb 13 21:38:27.370734 containerd[1496]: time="2025-02-13T21:38:27.370692051Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\"" Feb 13 21:38:27.370851 containerd[1496]: time="2025-02-13T21:38:27.370818260Z" level=info msg="TearDown network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" successfully" Feb 13 21:38:27.370940 containerd[1496]: time="2025-02-13T21:38:27.370849637Z" level=info msg="StopPodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" returns successfully" Feb 13 21:38:27.375466 containerd[1496]: time="2025-02-13T21:38:27.374631071Z" level=info msg="RemovePodSandbox for \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\"" Feb 13 21:38:27.375466 containerd[1496]: time="2025-02-13T21:38:27.374666362Z" level=info msg="Forcibly stopping sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\"" Feb 13 21:38:27.375466 containerd[1496]: time="2025-02-13T21:38:27.374769658Z" level=info msg="TearDown network for sandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" successfully" Feb 13 21:38:27.378319 containerd[1496]: time="2025-02-13T21:38:27.378285435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.378550 containerd[1496]: time="2025-02-13T21:38:27.378491189Z" level=info msg="RemovePodSandbox \"b4c7767ed330038091ed0cd28a66297c95ac1b14e974709def642488d0f5fcf7\" returns successfully" Feb 13 21:38:27.379054 containerd[1496]: time="2025-02-13T21:38:27.379013604Z" level=info msg="StopPodSandbox for \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\"" Feb 13 21:38:27.379728 containerd[1496]: time="2025-02-13T21:38:27.379701258Z" level=info msg="TearDown network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" successfully" Feb 13 21:38:27.379937 containerd[1496]: time="2025-02-13T21:38:27.379869466Z" level=info msg="StopPodSandbox for \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" returns successfully" Feb 13 21:38:27.381233 containerd[1496]: time="2025-02-13T21:38:27.380476704Z" level=info msg="RemovePodSandbox for \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\"" Feb 13 21:38:27.381233 containerd[1496]: time="2025-02-13T21:38:27.380521140Z" level=info msg="Forcibly stopping sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\"" Feb 13 21:38:27.381233 containerd[1496]: time="2025-02-13T21:38:27.380613008Z" level=info msg="TearDown network for sandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" successfully" Feb 13 21:38:27.382943 containerd[1496]: time="2025-02-13T21:38:27.382910196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:38:27.383227 containerd[1496]: time="2025-02-13T21:38:27.382957900Z" level=info msg="RemovePodSandbox \"493370e403efd7caddcbffc152072e9e9266235fcd0dc6513ad6ab66c3751ba7\" returns successfully" Feb 13 21:38:27.383651 containerd[1496]: time="2025-02-13T21:38:27.383415829Z" level=info msg="StopPodSandbox for \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\"" Feb 13 21:38:27.383651 containerd[1496]: time="2025-02-13T21:38:27.383537414Z" level=info msg="TearDown network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\" successfully" Feb 13 21:38:27.383651 containerd[1496]: time="2025-02-13T21:38:27.383558616Z" level=info msg="StopPodSandbox for \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\" returns successfully" Feb 13 21:38:27.384417 containerd[1496]: time="2025-02-13T21:38:27.384142259Z" level=info msg="RemovePodSandbox for \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\"" Feb 13 21:38:27.384417 containerd[1496]: time="2025-02-13T21:38:27.384194075Z" level=info msg="Forcibly stopping sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\"" Feb 13 21:38:27.384417 containerd[1496]: time="2025-02-13T21:38:27.384287493Z" level=info msg="TearDown network for sandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\" successfully" Feb 13 21:38:27.386975 containerd[1496]: time="2025-02-13T21:38:27.386934947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:38:27.387248 containerd[1496]: time="2025-02-13T21:38:27.386982374Z" level=info msg="RemovePodSandbox \"1fc4d13b0838af62696cf968cec17dcffbcfda9d279bb9ec9a38143c3b67f8cd\" returns successfully" Feb 13 21:38:28.281387 kubelet[1916]: E0213 21:38:28.281317 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:28.329956 systemd[1]: run-containerd-runc-k8s.io-5f62b4edac1d57c08549ffb4c1ce786898c44df399e41f6cb17f6e60a5e775ba-runc.bncqQO.mount: Deactivated successfully. Feb 13 21:38:29.282440 kubelet[1916]: E0213 21:38:29.282379 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:30.282789 kubelet[1916]: E0213 21:38:30.282706 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:31.284012 kubelet[1916]: E0213 21:38:31.283923 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:32.285003 kubelet[1916]: E0213 21:38:32.284923 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:33.285774 kubelet[1916]: E0213 21:38:33.285685 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:33.690798 systemd[1]: Created slice kubepods-besteffort-pod0be41e69_7cb3_4284_b275_e08cdcdf7549.slice - libcontainer container kubepods-besteffort-pod0be41e69_7cb3_4284_b275_e08cdcdf7549.slice. 
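The long run of StopPodSandbox / RemovePodSandbox entries above is the kubelet garbage-collecting sandboxes whose networks were already torn down: containerd can no longer find them, logs the nil-podSandboxStatus warning, and the removal still returns successfully, so those warnings are benign. The same cleanup can be driven by hand with crictl; the sketch below is a hypothetical helper (not taken from this log) and assumes a crictl binary on PATH that is configured for this node's containerd socket.

```python
# Hypothetical cleanup helper (not from this log). Assumes `crictl` is on
# PATH and configured for this node's containerd socket.
import subprocess

def stale_sandbox_ids():
    # Sandboxes that are no longer running; these are the ones the kubelet
    # is garbage-collecting in the entries above.
    out = subprocess.run(
        ["crictl", "pods", "-q", "--state", "NotReady"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def force_remove(sandbox_id):
    # Mirrors the "Forcibly stopping sandbox" / "RemovePodSandbox" pair in
    # the log; "not found" is tolerated here, as containerd tolerates it.
    subprocess.run(["crictl", "stopp", sandbox_id], check=False)
    subprocess.run(["crictl", "rmp", sandbox_id], check=False)

if __name__ == "__main__":
    for sid in stale_sandbox_ids():
        force_remove(sid)
```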
Feb 13 21:38:33.859942 kubelet[1916]: I0213 21:38:33.859830 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkm5\" (UniqueName: \"kubernetes.io/projected/0be41e69-7cb3-4284-b275-e08cdcdf7549-kube-api-access-spkm5\") pod \"test-pod-1\" (UID: \"0be41e69-7cb3-4284-b275-e08cdcdf7549\") " pod="default/test-pod-1" Feb 13 21:38:33.860221 kubelet[1916]: I0213 21:38:33.859963 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d4598149-bc4e-4d5b-bd32-88d4273709b7\" (UniqueName: \"kubernetes.io/nfs/0be41e69-7cb3-4284-b275-e08cdcdf7549-pvc-d4598149-bc4e-4d5b-bd32-88d4273709b7\") pod \"test-pod-1\" (UID: \"0be41e69-7cb3-4284-b275-e08cdcdf7549\") " pod="default/test-pod-1" Feb 13 21:38:34.008366 kernel: FS-Cache: Loaded Feb 13 21:38:34.087582 kernel: RPC: Registered named UNIX socket transport module. Feb 13 21:38:34.087808 kernel: RPC: Registered udp transport module. Feb 13 21:38:34.088446 kernel: RPC: Registered tcp transport module. Feb 13 21:38:34.089540 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 21:38:34.090680 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 21:38:34.286738 kubelet[1916]: E0213 21:38:34.286393 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:34.369567 kernel: NFS: Registering the id_resolver key type Feb 13 21:38:34.376388 kernel: Key type id_resolver registered Feb 13 21:38:34.376462 kernel: Key type id_legacy registered Feb 13 21:38:34.430012 nfsidmap[3870]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 21:38:34.439229 nfsidmap[3873]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 21:38:34.597705 containerd[1496]: time="2025-02-13T21:38:34.596859278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0be41e69-7cb3-4284-b275-e08cdcdf7549,Namespace:default,Attempt:0,}" Feb 13 21:38:34.791657 systemd-networkd[1411]: cali5ec59c6bf6e: Link UP Feb 13 21:38:34.793606 systemd-networkd[1411]: cali5ec59c6bf6e: Gained carrier Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.680 [INFO][3876] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.24.66-k8s-test--pod--1-eth0 default 0be41e69-7cb3-4284-b275-e08cdcdf7549 1519 0 2025-02-13 21:38:18 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.24.66 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.680 [INFO][3876] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-eth0" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.731 [INFO][3888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" 
HandleID="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Workload="10.230.24.66-k8s-test--pod--1-eth0" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.746 [INFO][3888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" HandleID="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Workload="10.230.24.66-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334d90), Attrs:map[string]string{"namespace":"default", "node":"10.230.24.66", "pod":"test-pod-1", "timestamp":"2025-02-13 21:38:34.731061444 +0000 UTC"}, Hostname:"10.230.24.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.746 [INFO][3888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.746 [INFO][3888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.746 [INFO][3888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.24.66' Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.749 [INFO][3888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.755 [INFO][3888] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.760 [INFO][3888] ipam/ipam.go 489: Trying affinity for 192.168.1.64/26 host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.763 [INFO][3888] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.767 [INFO][3888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.64/26 host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.768 [INFO][3888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.64/26 handle="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.771 [INFO][3888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643 Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.776 [INFO][3888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.64/26 handle="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.783 [INFO][3888] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.68/26] block=192.168.1.64/26 handle="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" host="10.230.24.66" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.783 [INFO][3888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.68/26] handle="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" host="10.230.24.66" Feb 13 21:38:34.813372 
containerd[1496]: 2025-02-13 21:38:34.783 [INFO][3888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.783 [INFO][3888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.68/26] IPv6=[] ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" HandleID="k8s-pod-network.d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Workload="10.230.24.66-k8s-test--pod--1-eth0" Feb 13 21:38:34.813372 containerd[1496]: 2025-02-13 21:38:34.787 [INFO][3876] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0be41e69-7cb3-4284-b275-e08cdcdf7549", ResourceVersion:"1519", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.1.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:38:34.815116 containerd[1496]: 2025-02-13 21:38:34.787 [INFO][3876] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.68/32] ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-eth0" Feb 13 21:38:34.815116 containerd[1496]: 2025-02-13 21:38:34.787 [INFO][3876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-eth0" Feb 13 21:38:34.815116 containerd[1496]: 2025-02-13 21:38:34.792 [INFO][3876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-eth0" Feb 13 21:38:34.815116 containerd[1496]: 2025-02-13 21:38:34.793 [INFO][3876] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.24.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0be41e69-7cb3-4284-b275-e08cdcdf7549", 
ResourceVersion:"1519", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.24.66", ContainerID:"d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.1.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"da:a4:99:0e:ae:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:38:34.815116 containerd[1496]: 2025-02-13 21:38:34.805 [INFO][3876] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.24.66-k8s-test--pod--1-eth0" Feb 13 21:38:34.847258 containerd[1496]: time="2025-02-13T21:38:34.847103053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:38:34.847258 containerd[1496]: time="2025-02-13T21:38:34.847197491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:38:34.847258 containerd[1496]: time="2025-02-13T21:38:34.847218704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:38:34.849067 containerd[1496]: time="2025-02-13T21:38:34.847338124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:38:34.872463 systemd[1]: Started cri-containerd-d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643.scope - libcontainer container d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643. 
Feb 13 21:38:34.933309 containerd[1496]: time="2025-02-13T21:38:34.933245136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0be41e69-7cb3-4284-b275-e08cdcdf7549,Namespace:default,Attempt:0,} returns sandbox id \"d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643\"" Feb 13 21:38:34.942245 containerd[1496]: time="2025-02-13T21:38:34.942099882Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 21:38:35.286820 kubelet[1916]: E0213 21:38:35.286744 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:35.367877 containerd[1496]: time="2025-02-13T21:38:35.367320621Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 21:38:35.371207 containerd[1496]: time="2025-02-13T21:38:35.370955602Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:38:35.371766 containerd[1496]: time="2025-02-13T21:38:35.371722990Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 429.581647ms" Feb 13 21:38:35.371852 containerd[1496]: time="2025-02-13T21:38:35.371768560Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 21:38:35.375555 containerd[1496]: time="2025-02-13T21:38:35.375182793Z" level=info msg="CreateContainer within sandbox \"d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 21:38:35.398341 containerd[1496]: time="2025-02-13T21:38:35.398268790Z" level=info msg="CreateContainer within sandbox \"d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bc796aa8169d427e34fba329d8fb00180136e30137f1f430d63d39d0dab06c06\"" Feb 13 21:38:35.399427 containerd[1496]: time="2025-02-13T21:38:35.399256934Z" level=info msg="StartContainer for \"bc796aa8169d427e34fba329d8fb00180136e30137f1f430d63d39d0dab06c06\"" Feb 13 21:38:35.451680 systemd[1]: Started cri-containerd-bc796aa8169d427e34fba329d8fb00180136e30137f1f430d63d39d0dab06c06.scope - libcontainer container bc796aa8169d427e34fba329d8fb00180136e30137f1f430d63d39d0dab06c06. 
Feb 13 21:38:35.488603 containerd[1496]: time="2025-02-13T21:38:35.488533003Z" level=info msg="StartContainer for \"bc796aa8169d427e34fba329d8fb00180136e30137f1f430d63d39d0dab06c06\" returns successfully" Feb 13 21:38:35.944604 systemd-networkd[1411]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 21:38:36.287051 kubelet[1916]: E0213 21:38:36.286969 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:37.287878 kubelet[1916]: E0213 21:38:37.287802 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:38.288800 kubelet[1916]: E0213 21:38:38.288678 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:39.289634 kubelet[1916]: E0213 21:38:39.289486 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:40.290307 kubelet[1916]: E0213 21:38:40.290226 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:41.290450 kubelet[1916]: E0213 21:38:41.290369 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:42.291059 kubelet[1916]: E0213 21:38:42.290972 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:43.291972 kubelet[1916]: E0213 21:38:43.291888 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:44.292660 kubelet[1916]: E0213 21:38:44.292549 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 21:38:45.293412 kubelet[1916]: E0213 21:38:45.293324 1916 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
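Above, containerd pulls ghcr.io/flatcar/nginx:latest, creates the test container inside sandbox d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643, and StartContainer returns successfully. A quick way to see the same state from the node is crictl; the sketch below is illustrative only, assumes crictl is installed and configured for this containerd, and reuses the sandbox and container IDs from the log entries above.

```python
# Illustrative only (not from this log). Assumes `crictl` is installed and
# configured for this containerd; IDs are copied from the entries above.
import subprocess

SANDBOX = "d08aeff193c2feb80803d29b65fc3f445bc1d1da78c60dd092c1f37ce9642643"
CONTAINER = "bc796aa8169d427e34fba329d8fb00180136e30137f1f430d63d39d0dab06c06"

def run(*args):
    print("$", " ".join(args))
    subprocess.run(args, check=True)

if __name__ == "__main__":
    run("crictl", "pods", "--id", SANDBOX)    # the test-pod-1 sandbox
    run("crictl", "ps", "-a", "-p", SANDBOX)  # containers inside it
    run("crictl", "inspect", CONTAINER)       # full status of the test container
```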
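The entry that keeps repeating every second, kubelet's "Unable to read config path" for /etc/kubernetes/manifests, means only that the configured staticPodPath does not exist on this node. On a node that runs no static pods it is pure noise and stops as soon as the directory is present; a minimal sketch, assuming the default path shown in the log:

```python
# Minimal sketch, assuming the staticPodPath reported in the log entries.
# Needs root, since /etc/kubernetes is root-owned.
from pathlib import Path

Path("/etc/kubernetes/manifests").mkdir(parents=True, exist_ok=True)
```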