Jan 30 14:55:43.032919 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 14:55:43.032974 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 14:55:43.032990 kernel: BIOS-provided physical RAM map:
Jan 30 14:55:43.033021 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 14:55:43.033033 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 14:55:43.033044 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 14:55:43.033056 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 30 14:55:43.033067 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 30 14:55:43.033078 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 14:55:43.033089 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 14:55:43.033101 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 14:55:43.033112 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 14:55:43.033128 kernel: NX (Execute Disable) protection: active
Jan 30 14:55:43.033140 kernel: APIC: Static calls initialized
Jan 30 14:55:43.033153 kernel: SMBIOS 2.8 present.
Jan 30 14:55:43.033166 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 30 14:55:43.033185 kernel: Hypervisor detected: KVM
Jan 30 14:55:43.033201 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 14:55:43.033214 kernel: kvm-clock: using sched offset of 4469521371 cycles
Jan 30 14:55:43.033227 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 14:55:43.033239 kernel: tsc: Detected 2499.998 MHz processor
Jan 30 14:55:43.033253 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:55:43.033265 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:55:43.033277 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 30 14:55:43.033290 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 14:55:43.033302 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:55:43.033318 kernel: Using GB pages for direct mapping
Jan 30 14:55:43.033331 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:55:43.033343 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 30 14:55:43.033355 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:55:43.033368 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:55:43.033380 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:55:43.033392 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 30 14:55:43.033404 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:55:43.033416 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:55:43.033433 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:55:43.033445 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:55:43.033458 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 30 14:55:43.033470 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 30 14:55:43.033482 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 30 14:55:43.033501 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 30 14:55:43.033514 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 30 14:55:43.033531 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 30 14:55:43.033544 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 30 14:55:43.033556 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 14:55:43.033569 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 14:55:43.033581 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 30 14:55:43.033594 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 30 14:55:43.033606 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 30 14:55:43.033619 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 30 14:55:43.033636 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 30 14:55:43.033648 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 30 14:55:43.033661 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 30 14:55:43.033674 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 30 14:55:43.033698 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 30 14:55:43.033712 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 30 14:55:43.033725 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 30 14:55:43.033737 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 30 14:55:43.033750 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 30 14:55:43.033768 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 30 14:55:43.033781 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 14:55:43.033794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 14:55:43.033807 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 30 14:55:43.033820 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 30 14:55:43.033833 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 30 14:55:43.033845 kernel: Zone ranges:
Jan 30 14:55:43.033858 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:55:43.033870 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 30 14:55:43.033883 kernel: Normal empty
Jan 30 14:55:43.033901 kernel: Movable zone start for each node
Jan 30 14:55:43.033913 kernel: Early memory node ranges
Jan 30 14:55:43.033926 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 14:55:43.033939 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 30 14:55:43.033951 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 30 14:55:43.033964 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:55:43.033976 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 14:55:43.033989 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 30 14:55:43.034019 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 14:55:43.034041 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 14:55:43.034054 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 14:55:43.034066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 14:55:43.034079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 14:55:43.034092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:55:43.034104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 14:55:43.034117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 14:55:43.034129 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:55:43.034142 kernel: TSC deadline timer available
Jan 30 14:55:43.034159 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 30 14:55:43.034172 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 14:55:43.034185 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 14:55:43.034197 kernel: Booting paravirtualized kernel on KVM
Jan 30 14:55:43.034210 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:55:43.034223 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 14:55:43.034236 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 14:55:43.034248 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 14:55:43.034261 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 14:55:43.034278 kernel: kvm-guest: PV spinlocks enabled
Jan 30 14:55:43.034291 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 14:55:43.034306 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 14:55:43.034319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:55:43.034332 kernel: random: crng init done
Jan 30 14:55:43.034345 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:55:43.034358 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 14:55:43.034370 kernel: Fallback order for Node 0: 0
Jan 30 14:55:43.034388 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 30 14:55:43.034401 kernel: Policy zone: DMA32
Jan 30 14:55:43.034413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:55:43.034426 kernel: software IO TLB: area num 16.
Jan 30 14:55:43.034439 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 196876K reserved, 0K cma-reserved)
Jan 30 14:55:43.034452 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 14:55:43.034465 kernel: Kernel/User page tables isolation: enabled
Jan 30 14:55:43.034478 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 14:55:43.034490 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:55:43.034507 kernel: Dynamic Preempt: voluntary
Jan 30 14:55:43.034520 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:55:43.034534 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:55:43.034547 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 14:55:43.034560 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:55:43.034584 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:55:43.034602 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:55:43.034623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:55:43.034637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 14:55:43.034650 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 30 14:55:43.034663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:55:43.034676 kernel: Console: colour VGA+ 80x25
Jan 30 14:55:43.034708 kernel: printk: console [tty0] enabled
Jan 30 14:55:43.034723 kernel: printk: console [ttyS0] enabled
Jan 30 14:55:43.034736 kernel: ACPI: Core revision 20230628
Jan 30 14:55:43.034749 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:55:43.034762 kernel: x2apic enabled
Jan 30 14:55:43.034780 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 14:55:43.034794 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 30 14:55:43.034807 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 30 14:55:43.034821 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 14:55:43.034834 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 14:55:43.034847 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 14:55:43.034860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:55:43.034873 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 14:55:43.034886 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:55:43.034899 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 14:55:43.034918 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 14:55:43.034931 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:55:43.034944 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:55:43.034957 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 14:55:43.034970 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 30 14:55:43.034983 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 30 14:55:43.034996 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:55:43.035104 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:55:43.035119 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:55:43.035132 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:55:43.035146 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 14:55:43.035166 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:55:43.035179 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:55:43.035194 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:55:43.035207 kernel: landlock: Up and running.
Jan 30 14:55:43.035220 kernel: SELinux: Initializing.
Jan 30 14:55:43.035233 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 14:55:43.035246 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 14:55:43.035259 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 30 14:55:43.035272 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:55:43.035286 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:55:43.035304 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 14:55:43.035317 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 30 14:55:43.035331 kernel: signal: max sigframe size: 1776
Jan 30 14:55:43.035344 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:55:43.035358 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:55:43.035371 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 14:55:43.035384 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:55:43.035397 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:55:43.035410 kernel: .... node #0, CPUs: #1
Jan 30 14:55:43.035428 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 30 14:55:43.035442 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:55:43.035455 kernel: smpboot: Max logical packages: 16
Jan 30 14:55:43.035468 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 30 14:55:43.035481 kernel: devtmpfs: initialized
Jan 30 14:55:43.035494 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:55:43.035507 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:55:43.035521 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 14:55:43.035534 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:55:43.035552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:55:43.035565 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:55:43.035579 kernel: audit: type=2000 audit(1738248941.721:1): state=initialized audit_enabled=0 res=1
Jan 30 14:55:43.035592 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:55:43.035605 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:55:43.035618 kernel: cpuidle: using governor menu
Jan 30 14:55:43.035631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:55:43.035644 kernel: dca service started, version 1.12.1
Jan 30 14:55:43.035657 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 14:55:43.035675 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 14:55:43.035707 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:55:43.035722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:55:43.035735 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:55:43.035748 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:55:43.035762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:55:43.035775 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:55:43.035788 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:55:43.035801 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:55:43.035820 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:55:43.035834 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:55:43.035847 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:55:43.035860 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 14:55:43.035873 kernel: ACPI: Interpreter enabled
Jan 30 14:55:43.035886 kernel: ACPI: PM: (supports S0 S5)
Jan 30 14:55:43.035900 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:55:43.035913 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:55:43.035926 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 14:55:43.035944 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 14:55:43.035958 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:55:43.036250 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:55:43.036436 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:55:43.036606 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:55:43.036627 kernel: PCI host bridge to bus 0000:00
Jan 30 14:55:43.036837 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:55:43.037019 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:55:43.037180 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:55:43.037334 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 30 14:55:43.037490 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 14:55:43.037645 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 30 14:55:43.037815 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:55:43.038058 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 14:55:43.038277 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 30 14:55:43.038453 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 30 14:55:43.038625 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 30 14:55:43.038812 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 30 14:55:43.038982 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 14:55:43.039203 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.039387 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 30 14:55:43.039593 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.039809 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 30 14:55:43.042264 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.042466 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 30 14:55:43.042670 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.042865 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 30 14:55:43.044131 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.044313 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 30 14:55:43.044530 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.047061 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 30 14:55:43.047290 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.047471 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 30 14:55:43.047735 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 14:55:43.048123 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 30 14:55:43.048380 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:55:43.048555 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 30 14:55:43.048739 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 30 14:55:43.048908 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 14:55:43.050149 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 30 14:55:43.050371 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 14:55:43.050543 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 14:55:43.050728 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 30 14:55:43.050902 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 30 14:55:43.052181 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 14:55:43.052362 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 14:55:43.052576 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 14:55:43.052761 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 30 14:55:43.052951 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 30 14:55:43.054176 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 14:55:43.054356 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 14:55:43.054571 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 30 14:55:43.054765 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 30 14:55:43.054949 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 14:55:43.056160 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 14:55:43.056344 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 14:55:43.056545 kernel: pci_bus 0000:02: extended config space not accessible
Jan 30 14:55:43.056770 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 30 14:55:43.056969 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 30 14:55:43.059197 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 14:55:43.059387 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 14:55:43.059607 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 14:55:43.059805 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 30 14:55:43.059983 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 14:55:43.060202 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 14:55:43.060372 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 14:55:43.060596 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 14:55:43.060789 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 14:55:43.060973 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 14:55:43.062680 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 14:55:43.062872 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 14:55:43.063069 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 14:55:43.063240 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 14:55:43.063416 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 14:55:43.063588 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 14:55:43.063776 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 14:55:43.063947 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 14:55:43.065219 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 14:55:43.065405 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 14:55:43.065572 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 14:55:43.065757 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 14:55:43.065935 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 14:55:43.067165 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 14:55:43.067338 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 14:55:43.067506 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 14:55:43.067673 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 14:55:43.067706 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 14:55:43.067721 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 14:55:43.067735 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 14:55:43.067748 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 14:55:43.067770 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 14:55:43.067784 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 14:55:43.067798 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 14:55:43.067811 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 14:55:43.067825 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 14:55:43.067838 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 14:55:43.067851 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 14:55:43.067865 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 14:55:43.067878 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 14:55:43.067897 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 14:55:43.067911 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 14:55:43.067924 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 14:55:43.067938 kernel: iommu: Default domain type: Translated
Jan 30 14:55:43.067951 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:55:43.067972 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:55:43.067985 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:55:43.067998 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 14:55:43.069050 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 30 14:55:43.069257 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 14:55:43.069451 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 14:55:43.069617 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 14:55:43.069638 kernel: vgaarb: loaded
Jan 30 14:55:43.069652 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 14:55:43.069666 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:55:43.069691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:55:43.069707 kernel: pnp: PnP ACPI init
Jan 30 14:55:43.069906 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 14:55:43.069929 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 14:55:43.069943 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:55:43.069957 kernel: NET: Registered PF_INET protocol family
Jan 30 14:55:43.069970 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:55:43.069984 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 14:55:43.069997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:55:43.070031 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 14:55:43.070052 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 14:55:43.070067 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 14:55:43.070080 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 14:55:43.070094 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 14:55:43.070122 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:55:43.070135 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:55:43.070315 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 30 14:55:43.070480 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 14:55:43.070678 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 14:55:43.070865 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 14:55:43.073076 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 14:55:43.073252 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 14:55:43.073420 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 14:55:43.073587 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 14:55:43.073778 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 14:55:43.073949 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 14:55:43.075158 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 14:55:43.075329 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 14:55:43.075496 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 14:55:43.080068 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 14:55:43.080283 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 14:55:43.080460 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 14:55:43.080670 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 14:55:43.080869 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 14:55:43.081054 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 14:55:43.081225 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 14:55:43.081392 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 14:55:43.081561 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 14:55:43.081751 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 14:55:43.081921 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 14:55:43.083184 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 14:55:43.083366 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 14:55:43.083539 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 14:55:43.083728 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 14:55:43.083903 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 14:55:43.084104 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 14:55:43.084287 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 14:55:43.084461 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 14:55:43.084654 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 14:55:43.084963 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 14:55:43.088948 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 14:55:43.089199 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:55:43.089390 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 14:55:43.089565 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 14:55:43.089757 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 14:55:43.089942 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 14:55:43.090132 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 14:55:43.090304 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 14:55:43.090482 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 14:55:43.090655 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 14:55:43.090859 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 14:55:43.091077 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 14:55:43.091255 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 14:55:43.091436 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 14:55:43.091617 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 14:55:43.091826 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 14:55:43.091993 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:55:43.092178 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:55:43.092334 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:55:43.092519 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 30 14:55:43.092675 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 14:55:43.092845 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 30 14:55:43.093050 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 14:55:43.093219 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 30 14:55:43.093383 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 14:55:43.093558 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 14:55:43.093766 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 30 14:55:43.093934 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 14:55:43.094208 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 14:55:43.094402 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 30 14:55:43.094565 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 14:55:43.094738 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 14:55:43.094920 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 30 14:55:43.095110 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 14:55:43.095273 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 14:55:43.095455 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 30 14:55:43.095616 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 14:55:43.095790 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 14:55:43.095993 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 30 14:55:43.096207 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 14:55:43.096368 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 14:55:43.096555 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 30 14:55:43.096733 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 14:55:43.096895 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 14:55:43.097095 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 30 14:55:43.097261 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 30 14:55:43.097432 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 14:55:43.097455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 14:55:43.097470 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:55:43.097484 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan
30 14:55:43.097499 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 30 14:55:43.097513 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 14:55:43.097527 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 30 14:55:43.097542 kernel: Initialise system trusted keyrings Jan 30 14:55:43.097556 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 14:55:43.097577 kernel: Key type asymmetric registered Jan 30 14:55:43.097591 kernel: Asymmetric key parser 'x509' registered Jan 30 14:55:43.097605 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 14:55:43.097619 kernel: io scheduler mq-deadline registered Jan 30 14:55:43.097633 kernel: io scheduler kyber registered Jan 30 14:55:43.097647 kernel: io scheduler bfq registered Jan 30 14:55:43.097832 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 14:55:43.098023 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 14:55:43.098201 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.098384 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 14:55:43.098568 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 30 14:55:43.098772 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.098947 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 30 14:55:43.099173 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 30 14:55:43.099353 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.099528 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 30 
14:55:43.099712 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 30 14:55:43.099886 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.100167 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 30 14:55:43.100341 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 30 14:55:43.100521 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.100740 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 30 14:55:43.100910 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 30 14:55:43.101119 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.101292 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 30 14:55:43.101460 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 30 14:55:43.101654 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.101849 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 30 14:55:43.102073 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 30 14:55:43.102256 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:55:43.102278 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 14:55:43.102294 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 14:55:43.102308 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 14:55:43.102330 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:55:43.102345 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 14:55:43.102359 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 14:55:43.102373 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 14:55:43.102387 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 14:55:43.102402 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 14:55:43.102595 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 14:55:43.102793 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 14:55:43.102962 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T14:55:42 UTC (1738248942) Jan 30 14:55:43.103158 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 14:55:43.103183 kernel: intel_pstate: CPU model not supported Jan 30 14:55:43.103197 kernel: NET: Registered PF_INET6 protocol family Jan 30 14:55:43.103223 kernel: Segment Routing with IPv6 Jan 30 14:55:43.103237 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:55:43.103252 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:55:43.103266 kernel: Key type dns_resolver registered Jan 30 14:55:43.103289 kernel: IPI shorthand broadcast: enabled Jan 30 14:55:43.103310 kernel: sched_clock: Marking stable (1252013473, 239061294)->(1618303985, -127229218) Jan 30 14:55:43.103325 kernel: registered taskstats version 1 Jan 30 14:55:43.103346 kernel: Loading compiled-in X.509 certificates Jan 30 14:55:43.103360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 14:55:43.103374 kernel: Key type .fscrypt registered Jan 30 14:55:43.103388 kernel: Key type fscrypt-provisioning registered Jan 30 14:55:43.103411 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 14:55:43.103425 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:55:43.103444 kernel: ima: No architecture policies found Jan 30 14:55:43.103459 kernel: clk: Disabling unused clocks Jan 30 14:55:43.103473 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 14:55:43.103487 kernel: Write protecting the kernel read-only data: 38912k Jan 30 14:55:43.103501 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 14:55:43.103515 kernel: Run /init as init process Jan 30 14:55:43.103529 kernel: with arguments: Jan 30 14:55:43.103543 kernel: /init Jan 30 14:55:43.103558 kernel: with environment: Jan 30 14:55:43.103576 kernel: HOME=/ Jan 30 14:55:43.103590 kernel: TERM=linux Jan 30 14:55:43.103604 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:55:43.103634 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:55:43.103654 systemd[1]: Detected virtualization kvm. Jan 30 14:55:43.103669 systemd[1]: Detected architecture x86-64. Jan 30 14:55:43.103694 systemd[1]: Running in initrd. Jan 30 14:55:43.103710 systemd[1]: No hostname configured, using default hostname. Jan 30 14:55:43.103731 systemd[1]: Hostname set to . Jan 30 14:55:43.103747 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:55:43.103761 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:55:43.103776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:55:43.103791 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 14:55:43.103806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 14:55:43.103822 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:55:43.103837 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:55:43.103857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:55:43.103874 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:55:43.103889 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:55:43.103904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:55:43.103920 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:55:43.103934 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:55:43.103949 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:55:43.103969 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:55:43.103984 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:55:43.103999 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:55:43.104060 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:55:43.104076 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:55:43.104091 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:55:43.104106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:55:43.104122 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:55:43.104144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 14:55:43.104159 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:55:43.104175 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:55:43.104190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:55:43.104204 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:55:43.104219 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:55:43.104234 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:55:43.104249 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:55:43.104264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:55:43.104285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:55:43.104348 systemd-journald[201]: Collecting audit messages is disabled. Jan 30 14:55:43.104392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:55:43.104407 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:55:43.104430 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:55:43.104445 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:55:43.104459 kernel: Bridge firewalling registered Jan 30 14:55:43.104490 systemd-journald[201]: Journal started Jan 30 14:55:43.104524 systemd-journald[201]: Runtime Journal (/run/log/journal/60c980f99b814229940eeab3d7999246) is 4.7M, max 37.9M, 33.2M free. Jan 30 14:55:43.048368 systemd-modules-load[202]: Inserted module 'overlay' Jan 30 14:55:43.152624 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:55:43.088016 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 30 14:55:43.153712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 30 14:55:43.154931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:55:43.156432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:55:43.172242 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:55:43.175473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:55:43.179196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:55:43.187796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:55:43.205877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:55:43.209801 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:55:43.211823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:55:43.212829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:55:43.225252 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:55:43.228195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:55:43.242506 dracut-cmdline[237]: dracut-dracut-053 Jan 30 14:55:43.246379 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 14:55:43.284234 systemd-resolved[238]: Positive Trust Anchors: Jan 30 14:55:43.284294 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:55:43.284339 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:55:43.293649 systemd-resolved[238]: Defaulting to hostname 'linux'. Jan 30 14:55:43.296602 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:55:43.297734 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:55:43.352142 kernel: SCSI subsystem initialized Jan 30 14:55:43.364057 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:55:43.378030 kernel: iscsi: registered transport (tcp) Jan 30 14:55:43.404426 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:55:43.404519 kernel: QLogic iSCSI HBA Driver Jan 30 14:55:43.462315 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:55:43.467192 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:55:43.514058 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 14:55:43.514138 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:55:43.515274 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:55:43.565076 kernel: raid6: sse2x4 gen() 13176 MB/s Jan 30 14:55:43.583047 kernel: raid6: sse2x2 gen() 9509 MB/s Jan 30 14:55:43.601815 kernel: raid6: sse2x1 gen() 9546 MB/s Jan 30 14:55:43.601873 kernel: raid6: using algorithm sse2x4 gen() 13176 MB/s Jan 30 14:55:43.620869 kernel: raid6: .... xor() 7508 MB/s, rmw enabled Jan 30 14:55:43.620921 kernel: raid6: using ssse3x2 recovery algorithm Jan 30 14:55:43.647037 kernel: xor: automatically using best checksumming function avx Jan 30 14:55:43.822084 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:55:43.836407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:55:43.844193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:55:43.876800 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jan 30 14:55:43.884154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:55:43.893204 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:55:43.913811 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jan 30 14:55:43.953046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:55:43.959241 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:55:44.071068 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:55:44.090549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:55:44.125551 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:55:44.128771 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 30 14:55:44.130029 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:55:44.132652 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:55:44.141749 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:55:44.169848 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:55:44.216045 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 30 14:55:44.303955 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:55:44.304008 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 14:55:44.304308 kernel: libata version 3.00 loaded. Jan 30 14:55:44.304332 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 14:55:44.304366 kernel: GPT:17805311 != 125829119 Jan 30 14:55:44.304385 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:55:44.304404 kernel: GPT:17805311 != 125829119 Jan 30 14:55:44.304421 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:55:44.304440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:55:44.304458 kernel: AVX version of gcm_enc/dec engaged. 
Jan 30 14:55:44.304476 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 14:55:44.334265 kernel: AES CTR mode by8 optimization enabled Jan 30 14:55:44.334295 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 14:55:44.334327 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 14:55:44.335141 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 14:55:44.335542 kernel: scsi host0: ahci Jan 30 14:55:44.335792 kernel: ACPI: bus type USB registered Jan 30 14:55:44.336117 kernel: usbcore: registered new interface driver usbfs Jan 30 14:55:44.336159 kernel: usbcore: registered new interface driver hub Jan 30 14:55:44.336178 kernel: usbcore: registered new device driver usb Jan 30 14:55:44.336196 kernel: scsi host1: ahci Jan 30 14:55:44.336440 kernel: scsi host2: ahci Jan 30 14:55:44.337251 kernel: scsi host3: ahci Jan 30 14:55:44.337497 kernel: scsi host4: ahci Jan 30 14:55:44.337731 kernel: scsi host5: ahci Jan 30 14:55:44.337935 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 30 14:55:44.337958 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 30 14:55:44.337985 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 30 14:55:44.338024 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 30 14:55:44.338046 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 30 14:55:44.338065 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 30 14:55:44.254346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 14:55:44.456515 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (464) Jan 30 14:55:44.456569 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (475) Jan 30 14:55:44.254537 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:55:44.256913 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:55:44.257681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:55:44.257868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:55:44.258604 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:55:44.271383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:55:44.422912 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 14:55:44.458238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:55:44.472641 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 14:55:44.480392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 14:55:44.486492 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 14:55:44.487388 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 14:55:44.501257 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:55:44.505575 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:55:44.509940 disk-uuid[557]: Primary Header is updated. 
Jan 30 14:55:44.509940 disk-uuid[557]: Secondary Entries is updated. Jan 30 14:55:44.509940 disk-uuid[557]: Secondary Header is updated. Jan 30 14:55:44.517033 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:55:44.524109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:55:44.554172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:55:44.643061 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 14:55:44.643123 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 14:55:44.645897 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 14:55:44.646406 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 14:55:44.649905 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 14:55:44.656053 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 14:55:44.680033 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 30 14:55:44.709191 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 30 14:55:44.709447 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 14:55:44.709671 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 30 14:55:44.709883 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 30 14:55:44.710136 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 30 14:55:44.710344 kernel: hub 1-0:1.0: USB hub found Jan 30 14:55:44.710571 kernel: hub 1-0:1.0: 4 ports detected Jan 30 14:55:44.710788 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 30 14:55:44.711595 kernel: hub 2-0:1.0: USB hub found Jan 30 14:55:44.712983 kernel: hub 2-0:1.0: 4 ports detected Jan 30 14:55:44.943059 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 14:55:45.084073 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:55:45.090339 kernel: usbcore: registered new interface driver usbhid Jan 30 14:55:45.090397 kernel: usbhid: USB HID core driver Jan 30 14:55:45.099124 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 30 14:55:45.099183 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 30 14:55:45.526378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:55:45.526457 disk-uuid[558]: The operation has completed successfully. Jan 30 14:55:45.580442 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:55:45.580595 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:55:45.606282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:55:45.612119 sh[584]: Success Jan 30 14:55:45.629039 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 30 14:55:45.692559 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:55:45.702159 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:55:45.705925 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 14:55:45.736312 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 14:55:45.736387 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:55:45.738467 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:55:45.741936 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:55:45.741987 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:55:45.753990 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:55:45.755455 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:55:45.765209 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:55:45.768917 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:55:45.787879 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 14:55:45.787948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:55:45.787970 kernel: BTRFS info (device vda6): using free space tree Jan 30 14:55:45.793019 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 14:55:45.807860 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:55:45.809101 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 14:55:45.815601 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:55:45.822252 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:55:45.908202 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:55:45.920221 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 14:55:45.950367 systemd-networkd[768]: lo: Link UP Jan 30 14:55:45.950379 systemd-networkd[768]: lo: Gained carrier Jan 30 14:55:45.952855 systemd-networkd[768]: Enumeration completed Jan 30 14:55:45.952999 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:55:45.953672 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:55:45.953678 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:55:45.955378 systemd-networkd[768]: eth0: Link UP Jan 30 14:55:45.955384 systemd-networkd[768]: eth0: Gained carrier Jan 30 14:55:45.955396 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:55:45.958970 systemd[1]: Reached target network.target - Network. Jan 30 14:55:45.989156 ignition[673]: Ignition 2.20.0 Jan 30 14:55:45.989188 ignition[673]: Stage: fetch-offline Jan 30 14:55:45.991347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:55:45.989264 ignition[673]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:55:45.989283 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 14:55:45.989449 ignition[673]: parsed url from cmdline: "" Jan 30 14:55:45.989457 ignition[673]: no config URL provided Jan 30 14:55:45.989467 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:55:45.989482 ignition[673]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:55:45.989499 ignition[673]: failed to fetch config: resource requires networking Jan 30 14:55:45.989790 ignition[673]: Ignition finished successfully Jan 30 14:55:45.999227 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 14:55:46.011369 systemd-networkd[768]: eth0: DHCPv4 address 10.244.11.234/30, gateway 10.244.11.233 acquired from 10.244.11.233
Jan 30 14:55:46.020875 ignition[776]: Ignition 2.20.0
Jan 30 14:55:46.020892 ignition[776]: Stage: fetch
Jan 30 14:55:46.021181 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:55:46.021202 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:55:46.023270 ignition[776]: parsed url from cmdline: ""
Jan 30 14:55:46.023278 ignition[776]: no config URL provided
Jan 30 14:55:46.023289 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:55:46.023306 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:55:46.023493 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 30 14:55:46.023560 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 30 14:55:46.023708 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 30 14:55:46.039332 ignition[776]: GET result: OK
Jan 30 14:55:46.039716 ignition[776]: parsing config with SHA512: ecf914806bd9333f0d62d2ca419d3b7f196d23462dc4e7db3874ae7c6b5c54fcfc11b86a0f3fa060cf14e1d9f324a922d2a2a6045c68712fc21930f191bcdc92
Jan 30 14:55:46.043413 unknown[776]: fetched base config from "system"
Jan 30 14:55:46.043431 unknown[776]: fetched base config from "system"
Jan 30 14:55:46.043751 ignition[776]: fetch: fetch complete
Jan 30 14:55:46.043441 unknown[776]: fetched user config from "openstack"
Jan 30 14:55:46.043760 ignition[776]: fetch: fetch passed
Jan 30 14:55:46.045821 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:55:46.043828 ignition[776]: Ignition finished successfully
Jan 30 14:55:46.055277 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:55:46.071488 ignition[783]: Ignition 2.20.0
Jan 30 14:55:46.071508 ignition[783]: Stage: kargs
Jan 30 14:55:46.071734 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:55:46.071755 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:55:46.075041 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:55:46.072669 ignition[783]: kargs: kargs passed
Jan 30 14:55:46.072739 ignition[783]: Ignition finished successfully
Jan 30 14:55:46.083217 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:55:46.100811 ignition[789]: Ignition 2.20.0
Jan 30 14:55:46.100832 ignition[789]: Stage: disks
Jan 30 14:55:46.101093 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:55:46.101113 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:55:46.103155 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:55:46.102020 ignition[789]: disks: disks passed
Jan 30 14:55:46.104724 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:55:46.102098 ignition[789]: Ignition finished successfully
Jan 30 14:55:46.105815 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:55:46.107293 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:55:46.108535 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:55:46.110145 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:55:46.119298 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:55:46.138330 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 14:55:46.142328 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:55:46.148117 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:55:46.261032 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 30 14:55:46.262132 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:55:46.263370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:55:46.270151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:55:46.275073 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:55:46.276274 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 14:55:46.283225 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 30 14:55:46.284552 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:55:46.305672 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (805)
Jan 30 14:55:46.305892 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:55:46.305918 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:55:46.306684 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:55:46.306706 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:55:46.284595 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:55:46.293906 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:55:46.313481 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:55:46.316538 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:55:46.373038 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:55:46.382666 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:55:46.389976 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:55:46.397706 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:55:46.518221 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:55:46.526143 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:55:46.532264 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:55:46.543047 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:55:46.575369 ignition[922]: INFO : Ignition 2.20.0
Jan 30 14:55:46.576417 ignition[922]: INFO : Stage: mount
Jan 30 14:55:46.576417 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:55:46.576417 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:55:46.578906 ignition[922]: INFO : mount: mount passed
Jan 30 14:55:46.578906 ignition[922]: INFO : Ignition finished successfully
Jan 30 14:55:46.579580 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:55:46.581296 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:55:46.735117 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:55:47.251334 systemd-networkd[768]: eth0: Gained IPv6LL
Jan 30 14:55:48.117632 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:2fa:24:19ff:fef4:bea/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:2fa:24:19ff:fef4:bea/64 assigned by NDisc.
Jan 30 14:55:48.117651 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 30 14:55:53.480791 coreos-metadata[807]: Jan 30 14:55:53.480 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 14:55:53.497096 coreos-metadata[807]: Jan 30 14:55:53.496 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 14:55:53.513260 coreos-metadata[807]: Jan 30 14:55:53.513 INFO Fetch successful
Jan 30 14:55:53.515065 coreos-metadata[807]: Jan 30 14:55:53.514 INFO wrote hostname srv-ek463.gb1.brightbox.com to /sysroot/etc/hostname
Jan 30 14:55:53.517077 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 30 14:55:53.517270 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 30 14:55:53.526182 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:55:53.541275 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:55:53.563067 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Jan 30 14:55:53.568973 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:55:53.569035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:55:53.571713 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:55:53.576040 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:55:53.579180 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:55:53.604952 ignition[957]: INFO : Ignition 2.20.0
Jan 30 14:55:53.604952 ignition[957]: INFO : Stage: files
Jan 30 14:55:53.606687 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:55:53.606687 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:55:53.606687 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:55:53.609468 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:55:53.609468 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:55:53.611374 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:55:53.611374 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:55:53.613225 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:55:53.611807 unknown[957]: wrote ssh authorized keys file for user: core
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:55:53.615201 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 14:55:54.204318 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 14:55:55.433599 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:55:55.437607 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:55:55.437607 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:55:55.437607 ignition[957]: INFO : files: files passed
Jan 30 14:55:55.437607 ignition[957]: INFO : Ignition finished successfully
Jan 30 14:55:55.438168 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:55:55.448340 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:55:55.454144 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:55:55.458926 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:55:55.468264 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:55:55.482043 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:55:55.482043 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:55:55.484322 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:55:55.486580 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:55:55.488290 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:55:55.495341 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:55:55.529138 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:55:55.530167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:55:55.531501 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:55:55.532869 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:55:55.534542 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:55:55.541294 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:55:55.560403 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:55:55.565197 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:55:55.588643 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:55:55.589618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:55:55.591239 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:55:55.592729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:55:55.592905 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:55:55.594694 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:55:55.595674 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:55:55.597132 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:55:55.598431 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:55:55.599817 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:55:55.601304 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:55:55.602790 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:55:55.604480 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:55:55.605862 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:55:55.607343 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:55:55.608645 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:55:55.608840 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:55:55.610516 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:55:55.611500 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:55:55.612892 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:55:55.613072 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:55:55.614434 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:55:55.614619 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:55:55.616732 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:55:55.616911 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:55:55.618548 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:55:55.618704 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:55:55.626331 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:55:55.627078 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:55:55.627321 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:55:55.630298 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:55:55.634369 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:55:55.634652 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:55:55.637276 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:55:55.638148 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:55:55.647192 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:55:55.647367 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:55:55.655372 ignition[1010]: INFO : Ignition 2.20.0
Jan 30 14:55:55.656909 ignition[1010]: INFO : Stage: umount
Jan 30 14:55:55.656909 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:55:55.656909 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:55:55.660818 ignition[1010]: INFO : umount: umount passed
Jan 30 14:55:55.660818 ignition[1010]: INFO : Ignition finished successfully
Jan 30 14:55:55.661486 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:55:55.661661 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:55:55.664042 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:55:55.664195 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:55:55.668088 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:55:55.668168 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:55:55.669358 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:55:55.669425 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:55:55.674233 systemd[1]: Stopped target network.target - Network.
Jan 30 14:55:55.674829 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:55:55.674906 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:55:55.675779 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:55:55.676385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:55:55.681115 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:55:55.687639 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:55:55.689325 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:55:55.690750 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:55:55.690833 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:55:55.692113 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:55:55.692190 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:55:55.693378 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:55:55.693471 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:55:55.694729 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:55:55.694796 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:55:55.696435 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:55:55.698027 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:55:55.700982 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:55:55.701759 systemd-networkd[768]: eth0: DHCPv6 lease lost
Jan 30 14:55:55.706395 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:55:55.706910 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:55:55.708438 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:55:55.708633 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:55:55.712045 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:55:55.712130 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:55:55.722211 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:55:55.723344 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:55:55.723419 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:55:55.724223 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:55:55.724293 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:55:55.725087 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:55:55.725151 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:55:55.726504 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:55:55.726570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:55:55.728253 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:55:55.742243 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:55:55.743263 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:55:55.744319 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:55:55.744582 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:55:55.746617 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:55:55.746738 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:55:55.748262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:55:55.748321 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:55:55.749756 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:55:55.749829 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:55:55.751907 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:55:55.751974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:55:55.753394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:55:55.753478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:55:55.761236 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:55:55.762019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:55:55.762098 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:55:55.767468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:55:55.767558 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:55:55.775363 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:55:55.776529 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:55:55.777831 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:55:55.777978 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:55:55.779539 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:55:55.780908 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:55:55.780990 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:55:55.792229 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:55:55.801526 systemd[1]: Switching root.
Jan 30 14:55:55.835738 systemd-journald[201]: Journal stopped
Jan 30 14:55:57.226839 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:55:57.226939 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:55:57.226965 kernel: SELinux: policy capability open_perms=1
Jan 30 14:55:57.226994 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:55:57.229096 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:55:57.229133 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:55:57.229161 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:55:57.229191 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:55:57.229213 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:55:57.229235 systemd[1]: Successfully loaded SELinux policy in 47.939ms.
Jan 30 14:55:57.229268 kernel: audit: type=1403 audit(1738248956.057:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:55:57.229291 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.798ms.
Jan 30 14:55:57.229315 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:55:57.229337 systemd[1]: Detected virtualization kvm.
Jan 30 14:55:57.229359 systemd[1]: Detected architecture x86-64.
Jan 30 14:55:57.229385 systemd[1]: Detected first boot.
Jan 30 14:55:57.229409 systemd[1]: Hostname set to .
Jan 30 14:55:57.229448 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:55:57.229473 zram_generator::config[1053]: No configuration found.
Jan 30 14:55:57.229495 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:55:57.229523 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 14:55:57.229551 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:55:57.229573 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:55:57.229595 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:55:57.229617 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:55:57.229638 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:55:57.229659 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:55:57.229687 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:55:57.229718 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:55:57.229746 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:55:57.229768 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:55:57.229795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:55:57.229816 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:55:57.229838 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:55:57.229859 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:55:57.229881 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:55:57.229903 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:55:57.229924 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 14:55:57.229951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:55:57.229973 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 14:55:57.229995 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 14:55:57.230104 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:55:57.230128 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:55:57.230150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:55:57.230179 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:55:57.230202 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:55:57.230234 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:55:57.230257 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:55:57.230279 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:55:57.230302 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:55:57.230323 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:55:57.230344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:55:57.230366 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:55:57.230387 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:55:57.230416 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:55:57.230450 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:55:57.230473 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:55:57.230495 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:55:57.230517 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:55:57.230540 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:55:57.230562 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:55:57.230584 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:55:57.230613 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:55:57.230636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:55:57.230658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:55:57.230680 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:55:57.230702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:55:57.230724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:55:57.230766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:55:57.230799 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:55:57.230833 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:55:57.230868 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:55:57.230890 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:55:57.230912 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:55:57.230934 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:55:57.230955 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:55:57.230994 kernel: fuse: init (API version 7.39) Jan 30 14:55:57.231015 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:55:57.231202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:55:57.231233 kernel: ACPI: bus type drm_connector registered Jan 30 14:55:57.231255 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:55:57.231283 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:55:57.231304 kernel: loop: module loaded Jan 30 14:55:57.231326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:55:57.231347 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:55:57.231375 systemd[1]: Stopped verity-setup.service. Jan 30 14:55:57.231448 systemd-journald[1139]: Collecting audit messages is disabled. Jan 30 14:55:57.231504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:55:57.231528 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:55:57.231557 systemd-journald[1139]: Journal started Jan 30 14:55:57.231593 systemd-journald[1139]: Runtime Journal (/run/log/journal/60c980f99b814229940eeab3d7999246) is 4.7M, max 37.9M, 33.2M free. Jan 30 14:55:56.851689 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:55:56.873486 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jan 30 14:55:56.874179 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:55:57.238084 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:55:57.243248 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:55:57.244155 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:55:57.244945 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:55:57.245877 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:55:57.246749 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:55:57.249718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:55:57.250929 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:55:57.251158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:55:57.252478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:55:57.252684 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:55:57.253783 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:55:57.253977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:55:57.256613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:55:57.256819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:55:57.257966 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:55:57.258266 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:55:57.259362 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:55:57.260538 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:55:57.260733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 30 14:55:57.261794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:55:57.263163 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:55:57.264304 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:55:57.280816 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:55:57.289102 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:55:57.300115 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:55:57.301202 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:55:57.301261 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:55:57.305173 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:55:57.315218 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:55:57.326731 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:55:57.327645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:55:57.338021 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:55:57.343244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:55:57.344834 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:55:57.348223 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 30 14:55:57.349072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:55:57.352217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:55:57.357213 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:55:57.361521 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:55:57.366484 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:55:57.377872 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:55:57.379095 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:55:57.440995 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:55:57.443452 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:55:57.444529 systemd-journald[1139]: Time spent on flushing to /var/log/journal/60c980f99b814229940eeab3d7999246 is 68.036ms for 1124 entries. Jan 30 14:55:57.444529 systemd-journald[1139]: System Journal (/var/log/journal/60c980f99b814229940eeab3d7999246) is 8.0M, max 584.8M, 576.8M free. Jan 30 14:55:57.544416 systemd-journald[1139]: Received client request to flush runtime journal. Jan 30 14:55:57.544513 kernel: loop0: detected capacity change from 0 to 218376 Jan 30 14:55:57.544569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:55:57.544609 kernel: loop1: detected capacity change from 0 to 138184 Jan 30 14:55:57.457243 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:55:57.506605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:55:57.549565 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 30 14:55:57.554804 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:55:57.557417 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:55:57.587548 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:55:57.605232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:55:57.624050 kernel: loop2: detected capacity change from 0 to 8 Jan 30 14:55:57.644608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:55:57.657666 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:55:57.666589 kernel: loop3: detected capacity change from 0 to 141000 Jan 30 14:55:57.701664 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 14:55:57.712329 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 30 14:55:57.712362 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 30 14:55:57.729242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:55:57.738756 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 14:55:57.774084 kernel: loop5: detected capacity change from 0 to 138184 Jan 30 14:55:57.818044 kernel: loop6: detected capacity change from 0 to 8 Jan 30 14:55:57.834225 kernel: loop7: detected capacity change from 0 to 141000 Jan 30 14:55:57.872924 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 14:55:57.875066 (sd-merge)[1211]: Merged extensions into '/usr'. Jan 30 14:55:57.884277 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:55:57.884299 systemd[1]: Reloading... 
Jan 30 14:55:58.022055 zram_generator::config[1234]: No configuration found. Jan 30 14:55:58.148736 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:55:58.283751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:55:58.355822 systemd[1]: Reloading finished in 467 ms. Jan 30 14:55:58.381791 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:55:58.387313 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:55:58.402217 systemd[1]: Starting ensure-sysext.service... Jan 30 14:55:58.411251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:55:58.431205 systemd[1]: Reloading requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:55:58.431230 systemd[1]: Reloading... Jan 30 14:55:58.457742 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:55:58.458900 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:55:58.460681 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:55:58.461616 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Jan 30 14:55:58.462124 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Jan 30 14:55:58.469479 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:55:58.469626 systemd-tmpfiles[1294]: Skipping /boot Jan 30 14:55:58.496350 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 14:55:58.496369 systemd-tmpfiles[1294]: Skipping /boot Jan 30 14:55:58.546034 zram_generator::config[1321]: No configuration found. Jan 30 14:55:58.731453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:55:58.804179 systemd[1]: Reloading finished in 372 ms. Jan 30 14:55:58.826227 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:55:58.832761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:55:58.845243 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 14:55:58.855953 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:55:58.862239 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:55:58.875267 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:55:58.880309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:55:58.884567 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:55:58.907479 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:55:58.911349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:55:58.911646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:55:58.917368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:55:58.921252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:55:58.929331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 14:55:58.930368 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:55:58.930537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:55:58.939066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:55:58.939773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:55:58.940519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:55:58.941081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:55:58.947699 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:55:58.952542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:55:58.953110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:55:58.959911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:55:58.961153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:55:58.963239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:55:58.967660 systemd[1]: Finished ensure-sysext.service. Jan 30 14:55:58.977245 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 30 14:55:58.990205 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:55:58.990517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:55:59.004037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:55:59.004665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:55:59.015376 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:55:59.015642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:55:59.017422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:55:59.021288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:55:59.021536 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:55:59.024098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:55:59.028895 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:55:59.030252 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:55:59.040311 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:55:59.043078 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:55:59.052698 systemd-udevd[1389]: Using default interface naming scheme 'v255'. Jan 30 14:55:59.058631 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:55:59.067527 augenrules[1420]: No rules Jan 30 14:55:59.068686 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 30 14:55:59.069313 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 14:55:59.079191 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:55:59.100116 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:55:59.109256 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:55:59.214774 systemd-networkd[1432]: lo: Link UP Jan 30 14:55:59.214789 systemd-networkd[1432]: lo: Gained carrier Jan 30 14:55:59.215816 systemd-networkd[1432]: Enumeration completed Jan 30 14:55:59.215945 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:55:59.227215 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:55:59.245500 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:55:59.246771 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:55:59.272888 systemd-resolved[1382]: Positive Trust Anchors: Jan 30 14:55:59.272916 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:55:59.272962 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:55:59.280858 systemd-resolved[1382]: Using system hostname 'srv-ek463.gb1.brightbox.com'. Jan 30 14:55:59.290247 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 30 14:55:59.294270 systemd[1]: Reached target network.target - Network. Jan 30 14:55:59.294929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:55:59.298350 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 14:55:59.327033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1433) Jan 30 14:55:59.386088 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:55:59.414049 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 14:55:59.423026 kernel: ACPI: button: Power Button [PWRF] Jan 30 14:55:59.484325 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:55:59.484339 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:55:59.487587 systemd-networkd[1432]: eth0: Link UP Jan 30 14:55:59.487602 systemd-networkd[1432]: eth0: Gained carrier Jan 30 14:55:59.487623 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:55:59.499312 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 14:55:59.508608 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 14:55:59.508677 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 14:55:59.508986 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 14:55:59.506520 systemd-networkd[1432]: eth0: DHCPv4 address 10.244.11.234/30, gateway 10.244.11.233 acquired from 10.244.11.233 Jan 30 14:55:59.508711 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Jan 30 14:55:59.530380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 30 14:55:59.539095 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:55:59.578696 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:55:59.648527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:55:59.748624 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:55:59.808149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:55:59.817301 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:55:59.840274 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:56:00.574291 systemd-resolved[1382]: Clock change detected. Flushing caches. Jan 30 14:56:00.574468 systemd-timesyncd[1405]: Contacted time server 129.250.35.251:123 (0.flatcar.pool.ntp.org). Jan 30 14:56:00.574561 systemd-timesyncd[1405]: Initial clock synchronization to Thu 2025-01-30 14:56:00.574225 UTC. Jan 30 14:56:00.610663 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:56:00.611860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:56:00.612608 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:56:00.613499 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:56:00.614487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:56:00.615581 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:56:00.616475 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 30 14:56:00.617266 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:56:00.618030 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:56:00.618100 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:56:00.618739 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:56:00.621165 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:56:00.624045 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:56:00.634524 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:56:00.637298 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:56:00.638730 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:56:00.639574 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:56:00.640235 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:56:00.640919 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:56:00.640973 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:56:00.652010 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:56:00.655326 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:56:00.656490 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:56:00.658285 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:56:00.667207 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:56:00.678534 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 30 14:56:00.680160 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:56:00.686298 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:56:00.693616 jq[1478]: false Jan 30 14:56:00.695379 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:56:00.702321 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:56:00.721313 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:56:00.723564 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:56:00.726391 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:56:00.729294 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:56:00.741775 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:56:00.745281 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:56:00.754544 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:56:00.756146 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 30 14:56:00.759746 extend-filesystems[1479]: Found loop4 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found loop5 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found loop6 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found loop7 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda1 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda2 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda3 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found usr Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda4 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda6 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda7 Jan 30 14:56:00.759746 extend-filesystems[1479]: Found vda9 Jan 30 14:56:00.759746 extend-filesystems[1479]: Checking size of /dev/vda9 Jan 30 14:56:00.833183 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 30 14:56:00.756666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:56:00.835304 extend-filesystems[1479]: Resized partition /dev/vda9 Jan 30 14:56:00.799177 dbus-daemon[1477]: [system] SELinux support is enabled Jan 30 14:56:00.756893 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:56:00.839794 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:56:00.828348 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1432 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 14:56:00.845450 jq[1487]: true Jan 30 14:56:00.800567 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 30 14:56:00.846921 update_engine[1486]: I20250130 14:56:00.823902 1486 main.cc:92] Flatcar Update Engine starting Jan 30 14:56:00.846921 update_engine[1486]: I20250130 14:56:00.841256 1486 update_check_scheduler.cc:74] Next update check in 2m31s Jan 30 14:56:00.812658 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:56:00.812703 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:56:00.853428 jq[1503]: true Jan 30 14:56:00.817774 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:56:00.817807 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:56:00.838796 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:56:00.839649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:56:00.847996 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:56:00.856534 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:56:00.857246 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 14:56:00.860277 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:56:00.898346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1437) Jan 30 14:56:00.934729 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 14:56:00.934781 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 14:56:00.942683 systemd-logind[1485]: New seat seat0. 
Jan 30 14:56:00.946624 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:56:01.018092 bash[1530]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:56:01.019660 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:56:01.039369 systemd[1]: Starting sshkeys.service... Jan 30 14:56:01.081114 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 14:56:01.092419 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:56:01.102533 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:56:01.140395 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 14:56:01.140395 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 14:56:01.140395 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 14:56:01.143306 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Jan 30 14:56:01.149939 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:56:01.150515 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:56:01.174539 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:56:01.252298 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 14:56:01.252494 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 14:56:01.253978 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1513 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 14:56:01.274530 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 30 14:56:01.299541 polkitd[1549]: Started polkitd version 121 Jan 30 14:56:01.317406 polkitd[1549]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 14:56:01.317504 polkitd[1549]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 14:56:01.321228 polkitd[1549]: Finished loading, compiling and executing 2 rules Jan 30 14:56:01.323180 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 14:56:01.323402 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 14:56:01.325110 polkitd[1549]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 14:56:01.361205 systemd-hostnamed[1513]: Hostname set to (static) Jan 30 14:56:01.390498 containerd[1508]: time="2025-01-30T14:56:01.388477706Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 14:56:01.423676 containerd[1508]: time="2025-01-30T14:56:01.423568110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:56:01.426681 containerd[1508]: time="2025-01-30T14:56:01.426632178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:56:01.426810 containerd[1508]: time="2025-01-30T14:56:01.426784825Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:56:01.426908 containerd[1508]: time="2025-01-30T14:56:01.426884801Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:56:01.427340 containerd[1508]: time="2025-01-30T14:56:01.427311701Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 30 14:56:01.427484 containerd[1508]: time="2025-01-30T14:56:01.427457927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:56:01.427695 containerd[1508]: time="2025-01-30T14:56:01.427665681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:56:01.427787 containerd[1508]: time="2025-01-30T14:56:01.427764684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:56:01.428139 containerd[1508]: time="2025-01-30T14:56:01.428108931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:56:01.428273 containerd[1508]: time="2025-01-30T14:56:01.428248017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:56:01.428370 containerd[1508]: time="2025-01-30T14:56:01.428344582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:56:01.428474 containerd[1508]: time="2025-01-30T14:56:01.428450360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:56:01.428681 containerd[1508]: time="2025-01-30T14:56:01.428655059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:56:01.429208 containerd[1508]: time="2025-01-30T14:56:01.429181432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:56:01.429732 containerd[1508]: time="2025-01-30T14:56:01.429402770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:56:01.429732 containerd[1508]: time="2025-01-30T14:56:01.429451389Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:56:01.429732 containerd[1508]: time="2025-01-30T14:56:01.429596569Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:56:01.429732 containerd[1508]: time="2025-01-30T14:56:01.429677579Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:56:01.434223 containerd[1508]: time="2025-01-30T14:56:01.434170218Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:56:01.434457 containerd[1508]: time="2025-01-30T14:56:01.434409494Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:56:01.434937 containerd[1508]: time="2025-01-30T14:56:01.434585123Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:56:01.434937 containerd[1508]: time="2025-01-30T14:56:01.434622949Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:56:01.434937 containerd[1508]: time="2025-01-30T14:56:01.434647311Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:56:01.434937 containerd[1508]: time="2025-01-30T14:56:01.434886372Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 30 14:56:01.435626 containerd[1508]: time="2025-01-30T14:56:01.435594380Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:56:01.435922 containerd[1508]: time="2025-01-30T14:56:01.435894770Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:56:01.436024 containerd[1508]: time="2025-01-30T14:56:01.436000189Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436136034Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436168457Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436190194Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436209394Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436232074Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436253791Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436282671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436306029Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436326225Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436367144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436390510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436409472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436442064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.436956 containerd[1508]: time="2025-01-30T14:56:01.436466197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436486568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436505298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436524405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436544104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436565290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436584932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436603547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436645331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436673259Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436710811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436734939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.437482 containerd[1508]: time="2025-01-30T14:56:01.436754055Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:56:01.438843 containerd[1508]: time="2025-01-30T14:56:01.438680173Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:56:01.438983 containerd[1508]: time="2025-01-30T14:56:01.438954981Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:56:01.439201 containerd[1508]: time="2025-01-30T14:56:01.439126584Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:56:01.439201 containerd[1508]: time="2025-01-30T14:56:01.439155612Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:56:01.439401 containerd[1508]: time="2025-01-30T14:56:01.439172426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:56:01.439566 containerd[1508]: time="2025-01-30T14:56:01.439375777Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:56:01.439566 containerd[1508]: time="2025-01-30T14:56:01.439518137Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:56:01.439566 containerd[1508]: time="2025-01-30T14:56:01.439541597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:56:01.440419 containerd[1508]: time="2025-01-30T14:56:01.440224003Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:56:01.440419 containerd[1508]: time="2025-01-30T14:56:01.440356614Z" level=info msg="Connect containerd service" Jan 30 14:56:01.441084 containerd[1508]: time="2025-01-30T14:56:01.440805747Z" level=info msg="using legacy CRI server" Jan 30 14:56:01.441084 containerd[1508]: time="2025-01-30T14:56:01.440847776Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:56:01.441084 containerd[1508]: time="2025-01-30T14:56:01.441044079Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:56:01.442684 containerd[1508]: time="2025-01-30T14:56:01.442602870Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:56:01.442971 containerd[1508]: time="2025-01-30T14:56:01.442904586Z" level=info msg="Start subscribing containerd event" Jan 30 14:56:01.443167 containerd[1508]: time="2025-01-30T14:56:01.443099770Z" level=info msg="Start recovering state" Jan 30 14:56:01.443448 containerd[1508]: time="2025-01-30T14:56:01.443401040Z" level=info msg="Start event monitor" Jan 30 14:56:01.443679 containerd[1508]: time="2025-01-30T14:56:01.443546753Z" level=info msg="Start 
snapshots syncer" Jan 30 14:56:01.443679 containerd[1508]: time="2025-01-30T14:56:01.443577453Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:56:01.443679 containerd[1508]: time="2025-01-30T14:56:01.443616093Z" level=info msg="Start streaming server" Jan 30 14:56:01.444900 containerd[1508]: time="2025-01-30T14:56:01.444775226Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:56:01.445233 containerd[1508]: time="2025-01-30T14:56:01.445182427Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:56:01.446310 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:56:01.447831 containerd[1508]: time="2025-01-30T14:56:01.447650497Z" level=info msg="containerd successfully booted in 0.061828s" Jan 30 14:56:01.485928 systemd-networkd[1432]: eth0: Gained IPv6LL Jan 30 14:56:01.489652 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:56:01.492580 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:56:01.500881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:56:01.510437 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:56:01.543033 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:56:01.708602 sshd_keygen[1509]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:56:01.749266 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:56:01.760643 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:56:01.777293 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:56:01.777826 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:56:01.788571 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:56:01.806726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 30 14:56:01.822879 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:56:01.826796 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 14:56:01.827899 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:56:02.486533 systemd-networkd[1432]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:2fa:24:19ff:fef4:bea/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:2fa:24:19ff:fef4:bea/64 assigned by NDisc. Jan 30 14:56:02.486549 systemd-networkd[1432]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 30 14:56:02.497983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:56:02.504372 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:56:02.529848 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:56:02.537384 systemd[1]: Started sshd@0-10.244.11.234:22-139.178.89.65:57614.service - OpenSSH per-connection server daemon (139.178.89.65:57614). Jan 30 14:56:03.136174 kubelet[1595]: E0130 14:56:03.136059 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:56:03.138499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:56:03.138753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:56:03.139389 systemd[1]: kubelet.service: Consumed 1.068s CPU time. 
Jan 30 14:56:03.450596 sshd[1599]: Accepted publickey for core from 139.178.89.65 port 57614 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:03.452887 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:03.470650 systemd-logind[1485]: New session 1 of user core. Jan 30 14:56:03.472300 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:56:03.482647 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:56:03.517450 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:56:03.526638 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:56:03.538033 (systemd)[1609]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:56:03.681328 systemd[1609]: Queued start job for default target default.target. Jan 30 14:56:03.692517 systemd[1609]: Created slice app.slice - User Application Slice. Jan 30 14:56:03.692565 systemd[1609]: Reached target paths.target - Paths. Jan 30 14:56:03.692590 systemd[1609]: Reached target timers.target - Timers. Jan 30 14:56:03.694942 systemd[1609]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:56:03.718579 systemd[1609]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:56:03.718783 systemd[1609]: Reached target sockets.target - Sockets. Jan 30 14:56:03.718811 systemd[1609]: Reached target basic.target - Basic System. Jan 30 14:56:03.718898 systemd[1609]: Reached target default.target - Main User Target. Jan 30 14:56:03.718965 systemd[1609]: Startup finished in 170ms. Jan 30 14:56:03.719048 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:56:03.734466 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 30 14:56:04.367582 systemd[1]: Started sshd@1-10.244.11.234:22-139.178.89.65:57618.service - OpenSSH per-connection server daemon (139.178.89.65:57618). Jan 30 14:56:05.257202 sshd[1622]: Accepted publickey for core from 139.178.89.65 port 57618 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:05.259320 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:05.266053 systemd-logind[1485]: New session 2 of user core. Jan 30 14:56:05.279434 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:56:05.874533 sshd[1624]: Connection closed by 139.178.89.65 port 57618 Jan 30 14:56:05.874301 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:05.879589 systemd[1]: sshd@1-10.244.11.234:22-139.178.89.65:57618.service: Deactivated successfully. Jan 30 14:56:05.882149 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:56:05.883521 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:56:05.884917 systemd-logind[1485]: Removed session 2. Jan 30 14:56:06.030510 systemd[1]: Started sshd@2-10.244.11.234:22-139.178.89.65:57628.service - OpenSSH per-connection server daemon (139.178.89.65:57628). Jan 30 14:56:06.861881 agetty[1588]: failed to open credentials directory Jan 30 14:56:06.862214 agetty[1589]: failed to open credentials directory Jan 30 14:56:06.877987 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:56:06.879992 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:56:06.885771 systemd-logind[1485]: New session 4 of user core. Jan 30 14:56:06.897569 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:56:06.903078 systemd-logind[1485]: New session 3 of user core. Jan 30 14:56:06.907324 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 14:56:06.931696 sshd[1629]: Accepted publickey for core from 139.178.89.65 port 57628 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:06.933043 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:06.944415 systemd-logind[1485]: New session 5 of user core. Jan 30 14:56:06.951364 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:56:07.545971 sshd[1657]: Connection closed by 139.178.89.65 port 57628 Jan 30 14:56:07.546883 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:07.551487 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:56:07.552946 systemd[1]: sshd@2-10.244.11.234:22-139.178.89.65:57628.service: Deactivated successfully. Jan 30 14:56:07.555626 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:56:07.557240 systemd-logind[1485]: Removed session 5. Jan 30 14:56:07.784747 coreos-metadata[1476]: Jan 30 14:56:07.784 WARN failed to locate config-drive, using the metadata service API instead Jan 30 14:56:07.809905 coreos-metadata[1476]: Jan 30 14:56:07.809 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 14:56:07.815450 coreos-metadata[1476]: Jan 30 14:56:07.815 INFO Fetch failed with 404: resource not found Jan 30 14:56:07.815557 coreos-metadata[1476]: Jan 30 14:56:07.815 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 14:56:07.816211 coreos-metadata[1476]: Jan 30 14:56:07.816 INFO Fetch successful Jan 30 14:56:07.816368 coreos-metadata[1476]: Jan 30 14:56:07.816 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 14:56:07.831510 coreos-metadata[1476]: Jan 30 14:56:07.831 INFO Fetch successful Jan 30 14:56:07.831733 coreos-metadata[1476]: Jan 30 14:56:07.831 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 14:56:07.845191 
coreos-metadata[1476]: Jan 30 14:56:07.845 INFO Fetch successful Jan 30 14:56:07.845368 coreos-metadata[1476]: Jan 30 14:56:07.845 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 14:56:07.859685 coreos-metadata[1476]: Jan 30 14:56:07.859 INFO Fetch successful Jan 30 14:56:07.859857 coreos-metadata[1476]: Jan 30 14:56:07.859 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 14:56:07.876594 coreos-metadata[1476]: Jan 30 14:56:07.876 INFO Fetch successful Jan 30 14:56:07.900209 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:56:07.901144 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:56:08.248719 coreos-metadata[1534]: Jan 30 14:56:08.248 WARN failed to locate config-drive, using the metadata service API instead Jan 30 14:56:08.271301 coreos-metadata[1534]: Jan 30 14:56:08.271 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 14:56:08.297806 coreos-metadata[1534]: Jan 30 14:56:08.297 INFO Fetch successful Jan 30 14:56:08.298207 coreos-metadata[1534]: Jan 30 14:56:08.298 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 14:56:08.324396 coreos-metadata[1534]: Jan 30 14:56:08.324 INFO Fetch successful Jan 30 14:56:08.326596 unknown[1534]: wrote ssh authorized keys file for user: core Jan 30 14:56:08.353370 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:56:08.354059 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:56:08.356298 systemd[1]: Finished sshkeys.service. Jan 30 14:56:08.359660 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:56:08.360000 systemd[1]: Startup finished in 1.434s (kernel) + 13.300s (initrd) + 11.619s (userspace) = 26.354s. 
Jan 30 14:56:13.226516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:56:13.233343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:56:13.469428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:56:13.485619 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:56:13.546647 kubelet[1682]: E0130 14:56:13.546517 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:56:13.550703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:56:13.550996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:56:17.703347 systemd[1]: Started sshd@3-10.244.11.234:22-139.178.89.65:36842.service - OpenSSH per-connection server daemon (139.178.89.65:36842). Jan 30 14:56:18.613648 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 36842 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:18.615481 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:18.621725 systemd-logind[1485]: New session 6 of user core. Jan 30 14:56:18.629317 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:56:19.235207 sshd[1692]: Connection closed by 139.178.89.65 port 36842 Jan 30 14:56:19.234482 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:19.238823 systemd[1]: sshd@3-10.244.11.234:22-139.178.89.65:36842.service: Deactivated successfully. Jan 30 14:56:19.239516 systemd-logind[1485]: Session 6 logged out. 
Waiting for processes to exit. Jan 30 14:56:19.241238 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:56:19.243043 systemd-logind[1485]: Removed session 6. Jan 30 14:56:19.396473 systemd[1]: Started sshd@4-10.244.11.234:22-139.178.89.65:36846.service - OpenSSH per-connection server daemon (139.178.89.65:36846). Jan 30 14:56:20.285460 sshd[1697]: Accepted publickey for core from 139.178.89.65 port 36846 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:20.287315 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:20.294544 systemd-logind[1485]: New session 7 of user core. Jan 30 14:56:20.304348 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:56:20.897951 sshd[1699]: Connection closed by 139.178.89.65 port 36846 Jan 30 14:56:20.898953 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:20.905249 systemd[1]: sshd@4-10.244.11.234:22-139.178.89.65:36846.service: Deactivated successfully. Jan 30 14:56:20.908172 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:56:20.909594 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:56:20.911485 systemd-logind[1485]: Removed session 7. Jan 30 14:56:21.057525 systemd[1]: Started sshd@5-10.244.11.234:22-139.178.89.65:36856.service - OpenSSH per-connection server daemon (139.178.89.65:36856). Jan 30 14:56:21.947240 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 36856 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:21.949273 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:21.957431 systemd-logind[1485]: New session 8 of user core. Jan 30 14:56:21.964385 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 14:56:22.562753 sshd[1706]: Connection closed by 139.178.89.65 port 36856 Jan 30 14:56:22.563661 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:22.567553 systemd[1]: sshd@5-10.244.11.234:22-139.178.89.65:36856.service: Deactivated successfully. Jan 30 14:56:22.569877 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:56:22.571668 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:56:22.573120 systemd-logind[1485]: Removed session 8. Jan 30 14:56:22.721405 systemd[1]: Started sshd@6-10.244.11.234:22-139.178.89.65:39926.service - OpenSSH per-connection server daemon (139.178.89.65:39926). Jan 30 14:56:23.615489 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 39926 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:23.617745 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:23.619343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:56:23.629503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:56:23.635308 systemd-logind[1485]: New session 9 of user core. Jan 30 14:56:23.650420 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:56:23.816352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:56:23.825386 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:56:23.883636 kubelet[1722]: E0130 14:56:23.882823 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:56:23.885237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:56:23.885507 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:56:24.106300 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:56:24.106804 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:56:24.129196 sudo[1730]: pam_unix(sudo:session): session closed for user root Jan 30 14:56:24.273181 sshd[1716]: Connection closed by 139.178.89.65 port 39926 Jan 30 14:56:24.274575 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:24.280513 systemd[1]: sshd@6-10.244.11.234:22-139.178.89.65:39926.service: Deactivated successfully. Jan 30 14:56:24.282849 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:56:24.283887 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:56:24.285823 systemd-logind[1485]: Removed session 9. Jan 30 14:56:24.436468 systemd[1]: Started sshd@7-10.244.11.234:22-139.178.89.65:39928.service - OpenSSH per-connection server daemon (139.178.89.65:39928). 
Jan 30 14:56:25.333174 sshd[1735]: Accepted publickey for core from 139.178.89.65 port 39928 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:25.335882 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:25.343922 systemd-logind[1485]: New session 10 of user core. Jan 30 14:56:25.351303 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:56:25.811923 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:56:25.812547 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:56:25.818246 sudo[1739]: pam_unix(sudo:session): session closed for user root Jan 30 14:56:25.826539 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 14:56:25.826976 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:56:25.846600 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 14:56:25.897296 augenrules[1761]: No rules Jan 30 14:56:25.898895 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:56:25.899272 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 14:56:25.901120 sudo[1738]: pam_unix(sudo:session): session closed for user root Jan 30 14:56:26.044957 sshd[1737]: Connection closed by 139.178.89.65 port 39928 Jan 30 14:56:26.045798 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:26.051252 systemd[1]: sshd@7-10.244.11.234:22-139.178.89.65:39928.service: Deactivated successfully. Jan 30 14:56:26.053306 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:56:26.054339 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:56:26.055873 systemd-logind[1485]: Removed session 10. 
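The `sudo` entries above follow a fixed field layout: invoking user, then `PWD`, `USER`, and `COMMAND` separated by ` ; `. A small sketch parsing one such entry (the function is hypothetical, written only to illustrate the format):

```python
# A sudo log entry from the session above.
ENTRY = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules"

def parse_sudo(entry: str) -> dict:
    """Split a sudo log entry into invoking user, cwd, target user, and command."""
    invoker, rest = entry.split(" : ", 1)
    fields = dict(part.strip().split("=", 1) for part in rest.split(";"))
    fields["INVOKER"] = invoker
    return fields

info = parse_sudo(ENTRY)
print(info["COMMAND"])  # → /usr/sbin/systemctl restart audit-rules
```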
Jan 30 14:56:26.198047 systemd[1]: Started sshd@8-10.244.11.234:22-139.178.89.65:39932.service - OpenSSH per-connection server daemon (139.178.89.65:39932). Jan 30 14:56:27.103877 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 39932 ssh2: RSA SHA256:BMORWh0f1Je5qeeIggsgWW6ty4h12TKl0sZx5GXb8BA Jan 30 14:56:27.105811 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:56:27.112377 systemd-logind[1485]: New session 11 of user core. Jan 30 14:56:27.126329 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:56:27.580053 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:56:27.580557 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:56:28.333344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:56:28.341402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:56:28.387190 systemd[1]: Reloading requested from client PID 1805 ('systemctl') (unit session-11.scope)... Jan 30 14:56:28.387224 systemd[1]: Reloading... Jan 30 14:56:28.533140 zram_generator::config[1847]: No configuration found. Jan 30 14:56:28.709203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:56:28.822597 systemd[1]: Reloading finished in 434 ms. Jan 30 14:56:28.901750 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:56:28.901901 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:56:28.902495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:56:28.908457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 14:56:29.079331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:56:29.087669 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:56:29.180569 kubelet[1911]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:56:29.180569 kubelet[1911]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 14:56:29.180569 kubelet[1911]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:56:29.181297 kubelet[1911]: I0130 14:56:29.180689 1911 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:56:30.024099 kubelet[1911]: I0130 14:56:30.024026 1911 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 14:56:30.024099 kubelet[1911]: I0130 14:56:30.024093 1911 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:56:30.026005 kubelet[1911]: I0130 14:56:30.024661 1911 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 14:56:30.055249 kubelet[1911]: I0130 14:56:30.055192 1911 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:56:30.067895 kubelet[1911]: E0130 14:56:30.067840 1911 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" 
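The deprecation warnings above say that `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file passed via `--config`. A sketch of the corresponding `KubeletConfiguration` fragment — the containerd socket path is an assumption consistent with the containerd runtime seen later in this log, and the plugin directory mirrors the flexvolume path the kubelet recreates below:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Assumed endpoint; the log only shows containerd v1.7.23 as the runtime.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Matches the flexvolume directory the kubelet recreates in this log.
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

`--pod-infra-container-image` has no config-file equivalent; per the warning it is slated for removal once the image GC reads the sandbox image from CRI.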
Jan 30 14:56:30.068159 kubelet[1911]: I0130 14:56:30.068134 1911 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:56:30.074058 kubelet[1911]: I0130 14:56:30.074017 1911 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:56:30.074667 kubelet[1911]: I0130 14:56:30.074615 1911 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:56:30.075034 kubelet[1911]: I0130 14:56:30.074762 1911 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.244.11.234","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManag
erReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:56:30.075393 kubelet[1911]: I0130 14:56:30.075370 1911 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:56:30.075502 kubelet[1911]: I0130 14:56:30.075485 1911 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 14:56:30.075813 kubelet[1911]: I0130 14:56:30.075792 1911 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:56:30.080054 kubelet[1911]: I0130 14:56:30.080030 1911 kubelet.go:446] "Attempting to sync node with API server" Jan 30 14:56:30.080217 kubelet[1911]: I0130 14:56:30.080196 1911 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:56:30.080351 kubelet[1911]: I0130 14:56:30.080331 1911 kubelet.go:352] "Adding apiserver pod source" Jan 30 14:56:30.080478 kubelet[1911]: I0130 14:56:30.080458 1911 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:56:30.085149 kubelet[1911]: E0130 14:56:30.084977 1911 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:30.085149 kubelet[1911]: E0130 14:56:30.085115 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:30.086208 kubelet[1911]: I0130 14:56:30.086177 1911 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 14:56:30.087235 kubelet[1911]: I0130 14:56:30.087187 1911 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:56:30.088127 kubelet[1911]: W0130 14:56:30.088089 1911 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:56:30.090662 kubelet[1911]: I0130 14:56:30.090619 1911 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 14:56:30.090757 kubelet[1911]: I0130 14:56:30.090682 1911 server.go:1287] "Started kubelet" Jan 30 14:56:30.091028 kubelet[1911]: I0130 14:56:30.090976 1911 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:56:30.092578 kubelet[1911]: I0130 14:56:30.092521 1911 server.go:490] "Adding debug handlers to kubelet server" Jan 30 14:56:30.096305 kubelet[1911]: I0130 14:56:30.096083 1911 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:56:30.100125 kubelet[1911]: I0130 14:56:30.099909 1911 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:56:30.102575 kubelet[1911]: I0130 14:56:30.102541 1911 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:56:30.113444 kubelet[1911]: I0130 14:56:30.103286 1911 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:56:30.113610 kubelet[1911]: I0130 14:56:30.113586 1911 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 14:56:30.114039 kubelet[1911]: E0130 14:56:30.113860 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:30.114172 kubelet[1911]: E0130 14:56:30.104652 1911 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.11.234.181f803f3188d1ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.11.234,UID:10.244.11.234,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.244.11.234,},FirstTimestamp:2025-01-30 14:56:30.090645997 +0000 UTC m=+0.994455367,LastTimestamp:2025-01-30 14:56:30.090645997 +0000 UTC m=+0.994455367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.11.234,}" Jan 30 14:56:30.115668 kubelet[1911]: I0130 14:56:30.114461 1911 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:56:30.115668 kubelet[1911]: I0130 14:56:30.114578 1911 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:56:30.116284 kubelet[1911]: E0130 14:56:30.116255 1911 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:56:30.122237 kubelet[1911]: E0130 14:56:30.122117 1911 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.11.234.181f803f330f6582 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.11.234,UID:10.244.11.234,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.244.11.234,},FirstTimestamp:2025-01-30 14:56:30.116242818 +0000 UTC m=+1.020052209,LastTimestamp:2025-01-30 14:56:30.116242818 +0000 UTC m=+1.020052209,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.11.234,}" Jan 30 14:56:30.122373 kubelet[1911]: W0130 14:56:30.122273 
1911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 14:56:30.122373 kubelet[1911]: E0130 14:56:30.122339 1911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 14:56:30.122651 kubelet[1911]: W0130 14:56:30.122404 1911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.244.11.234" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 14:56:30.122651 kubelet[1911]: E0130 14:56:30.122428 1911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.244.11.234\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 14:56:30.122651 kubelet[1911]: E0130 14:56:30.122518 1911 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.244.11.234\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 30 14:56:30.124290 kubelet[1911]: W0130 14:56:30.123470 1911 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 14:56:30.124290 kubelet[1911]: E0130 14:56:30.123498 1911 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 30 14:56:30.126087 kubelet[1911]: I0130 14:56:30.124997 1911 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:56:30.126087 kubelet[1911]: I0130 14:56:30.125136 1911 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:56:30.136706 kubelet[1911]: I0130 14:56:30.136671 1911 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:56:30.152582 kubelet[1911]: I0130 14:56:30.152537 1911 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 14:56:30.153147 kubelet[1911]: I0130 14:56:30.153128 1911 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 14:56:30.153381 kubelet[1911]: I0130 14:56:30.153361 1911 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:56:30.154200 kubelet[1911]: E0130 14:56:30.152954 1911 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.11.234.181f803f35201c13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.11.234,UID:10.244.11.234,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.244.11.234 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.244.11.234,},FirstTimestamp:2025-01-30 14:56:30.150892563 +0000 UTC m=+1.054701944,LastTimestamp:2025-01-30 14:56:30.150892563 +0000 UTC m=+1.054701944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.11.234,}" Jan 30 14:56:30.162713 kubelet[1911]: I0130 14:56:30.161185 1911 policy_none.go:49] "None policy: Start" Jan 30 14:56:30.162713 kubelet[1911]: I0130 14:56:30.161234 1911 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 14:56:30.162713 kubelet[1911]: I0130 14:56:30.161265 1911 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:56:30.177984 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:56:30.194752 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:56:30.201961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 14:56:30.209979 kubelet[1911]: I0130 14:56:30.209634 1911 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:56:30.211221 kubelet[1911]: I0130 14:56:30.210940 1911 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:56:30.211221 kubelet[1911]: I0130 14:56:30.211106 1911 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:56:30.215090 kubelet[1911]: I0130 14:56:30.214858 1911 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:56:30.218046 kubelet[1911]: E0130 14:56:30.218005 1911 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 14:56:30.218359 kubelet[1911]: E0130 14:56:30.218225 1911 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.11.234\" not found" Jan 30 14:56:30.230786 kubelet[1911]: I0130 14:56:30.230723 1911 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 30 14:56:30.232417 kubelet[1911]: I0130 14:56:30.232391 1911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:56:30.232760 kubelet[1911]: I0130 14:56:30.232579 1911 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 14:56:30.233653 kubelet[1911]: I0130 14:56:30.232881 1911 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 14:56:30.233653 kubelet[1911]: I0130 14:56:30.232905 1911 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 14:56:30.233653 kubelet[1911]: E0130 14:56:30.233091 1911 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 14:56:30.315588 kubelet[1911]: I0130 14:56:30.315420 1911 kubelet_node_status.go:76] "Attempting to register node" node="10.244.11.234" Jan 30 14:56:30.324288 kubelet[1911]: I0130 14:56:30.324249 1911 kubelet_node_status.go:79] "Successfully registered node" node="10.244.11.234" Jan 30 14:56:30.324400 kubelet[1911]: E0130 14:56:30.324299 1911 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.244.11.234\": node \"10.244.11.234\" not found" Jan 30 14:56:30.339916 kubelet[1911]: E0130 14:56:30.339877 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:30.441056 kubelet[1911]: E0130 14:56:30.440978 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:30.541705 kubelet[1911]: E0130 14:56:30.541627 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:30.642323 kubelet[1911]: E0130 14:56:30.642140 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:30.743267 
kubelet[1911]: E0130 14:56:30.743165 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:30.844027 kubelet[1911]: E0130 14:56:30.843961 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:30.944952 kubelet[1911]: E0130 14:56:30.944764 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:31.027574 kubelet[1911]: I0130 14:56:31.027454 1911 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 14:56:31.027861 kubelet[1911]: W0130 14:56:31.027826 1911 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 14:56:31.045773 kubelet[1911]: E0130 14:56:31.045688 1911 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.234\" not found" Jan 30 14:56:31.086351 kubelet[1911]: E0130 14:56:31.086268 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:31.132890 sudo[1772]: pam_unix(sudo:session): session closed for user root Jan 30 14:56:31.147602 kubelet[1911]: I0130 14:56:31.147410 1911 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 14:56:31.148278 containerd[1508]: time="2025-01-30T14:56:31.147822597Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
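Here the kubelet pushes the node's pod CIDR `192.168.1.0/24` to the container runtime over CRI, and containerd notes that no CNI config exists yet. A quick check of what that range covers, using only the standard library:

```python
import ipaddress

# Pod CIDR the kubelet hands to the runtime in the entry above.
pod_cidr = ipaddress.ip_network("192.168.1.0/24")

print(pod_cidr.num_addresses)    # → 256
print(pod_cidr.network_address)  # → 192.168.1.0
print(next(pod_cidr.hosts()))    # → 192.168.1.1  (first assignable pod IP)
```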
Jan 30 14:56:31.148791 kubelet[1911]: I0130 14:56:31.148104 1911 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 14:56:31.278026 sshd[1771]: Connection closed by 139.178.89.65 port 39932 Jan 30 14:56:31.277018 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jan 30 14:56:31.281443 systemd[1]: sshd@8-10.244.11.234:22-139.178.89.65:39932.service: Deactivated successfully. Jan 30 14:56:31.284107 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:56:31.286244 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:56:31.287756 systemd-logind[1485]: Removed session 11. Jan 30 14:56:32.087186 kubelet[1911]: I0130 14:56:32.087118 1911 apiserver.go:52] "Watching apiserver" Jan 30 14:56:32.087864 kubelet[1911]: E0130 14:56:32.087135 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:32.093402 kubelet[1911]: E0130 14:56:32.093363 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:32.108728 systemd[1]: Created slice kubepods-besteffort-pod004ed704_fd2e_44e5_8232_5bba29e49433.slice - libcontainer container kubepods-besteffort-pod004ed704_fd2e_44e5_8232_5bba29e49433.slice. 
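The `kubepods-besteffort-pod004ed704_fd2e_44e5_8232_5bba29e49433.slice` unit created above shows the naming scheme the kubelet uses with the systemd cgroup driver: the QoS class becomes a parent slice and the pod UID's dashes are rewritten to underscores. A sketch reproducing that naming; the guaranteed-class branch (pods placed directly under `kubepods.slice`) is an assumption from upstream kubelet behavior, not shown in this log:

```python
def pod_slice_name(pod_uid: str, qos: str = "besteffort") -> str:
    """Build the systemd slice name the kubelet uses for a pod (systemd cgroup driver).

    Burstable and besteffort pods get a QoS parent slice; dashes in the pod
    UID become underscores to form a valid systemd unit name.
    """
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

# The kube-proxy pod UID from the log:
print(pod_slice_name("004ed704-fd2e-44e5-8232-5bba29e49433"))
# → kubepods-besteffort-pod004ed704_fd2e_44e5_8232_5bba29e49433.slice
```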
Jan 30 14:56:32.115707 kubelet[1911]: I0130 14:56:32.115655 1911 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:56:32.127845 kubelet[1911]: I0130 14:56:32.127053 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58080993-5978-429b-9a5e-ecf0de7e680e-tigera-ca-bundle\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.127845 kubelet[1911]: I0130 14:56:32.127117 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-cni-bin-dir\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.127845 kubelet[1911]: I0130 14:56:32.127150 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-cni-net-dir\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.127845 kubelet[1911]: I0130 14:56:32.127178 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvh27\" (UniqueName: \"kubernetes.io/projected/58080993-5978-429b-9a5e-ecf0de7e680e-kube-api-access-lvh27\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.127845 kubelet[1911]: I0130 14:56:32.127207 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/004ed704-fd2e-44e5-8232-5bba29e49433-kube-proxy\") pod 
\"kube-proxy-2cp6r\" (UID: \"004ed704-fd2e-44e5-8232-5bba29e49433\") " pod="kube-system/kube-proxy-2cp6r" Jan 30 14:56:32.128206 kubelet[1911]: I0130 14:56:32.127233 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-xtables-lock\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128206 kubelet[1911]: I0130 14:56:32.127259 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-var-lib-calico\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128206 kubelet[1911]: I0130 14:56:32.127284 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11876523-7753-443e-b7b7-8d73fa03192e-kubelet-dir\") pod \"csi-node-driver-gqq8s\" (UID: \"11876523-7753-443e-b7b7-8d73fa03192e\") " pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:32.128206 kubelet[1911]: I0130 14:56:32.127310 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11876523-7753-443e-b7b7-8d73fa03192e-registration-dir\") pod \"csi-node-driver-gqq8s\" (UID: \"11876523-7753-443e-b7b7-8d73fa03192e\") " pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:32.128206 kubelet[1911]: I0130 14:56:32.127334 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/004ed704-fd2e-44e5-8232-5bba29e49433-xtables-lock\") pod \"kube-proxy-2cp6r\" (UID: 
\"004ed704-fd2e-44e5-8232-5bba29e49433\") " pod="kube-system/kube-proxy-2cp6r" Jan 30 14:56:32.128424 kubelet[1911]: I0130 14:56:32.127359 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-policysync\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128424 kubelet[1911]: I0130 14:56:32.127384 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/58080993-5978-429b-9a5e-ecf0de7e680e-node-certs\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128424 kubelet[1911]: I0130 14:56:32.127410 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-cni-log-dir\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128424 kubelet[1911]: I0130 14:56:32.127437 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hmq\" (UniqueName: \"kubernetes.io/projected/11876523-7753-443e-b7b7-8d73fa03192e-kube-api-access-b8hmq\") pod \"csi-node-driver-gqq8s\" (UID: \"11876523-7753-443e-b7b7-8d73fa03192e\") " pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:32.128424 kubelet[1911]: I0130 14:56:32.127480 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-lib-modules\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " 
pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128797 kubelet[1911]: I0130 14:56:32.127509 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-var-run-calico\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128797 kubelet[1911]: I0130 14:56:32.127538 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/58080993-5978-429b-9a5e-ecf0de7e680e-flexvol-driver-host\") pod \"calico-node-gpcp4\" (UID: \"58080993-5978-429b-9a5e-ecf0de7e680e\") " pod="calico-system/calico-node-gpcp4" Jan 30 14:56:32.128797 kubelet[1911]: I0130 14:56:32.127562 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/11876523-7753-443e-b7b7-8d73fa03192e-varrun\") pod \"csi-node-driver-gqq8s\" (UID: \"11876523-7753-443e-b7b7-8d73fa03192e\") " pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:32.128797 kubelet[1911]: I0130 14:56:32.127587 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11876523-7753-443e-b7b7-8d73fa03192e-socket-dir\") pod \"csi-node-driver-gqq8s\" (UID: \"11876523-7753-443e-b7b7-8d73fa03192e\") " pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:32.128797 kubelet[1911]: I0130 14:56:32.127662 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004ed704-fd2e-44e5-8232-5bba29e49433-lib-modules\") pod \"kube-proxy-2cp6r\" (UID: \"004ed704-fd2e-44e5-8232-5bba29e49433\") " pod="kube-system/kube-proxy-2cp6r" Jan 30 14:56:32.130642 
kubelet[1911]: I0130 14:56:32.127701 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wx47\" (UniqueName: \"kubernetes.io/projected/004ed704-fd2e-44e5-8232-5bba29e49433-kube-api-access-8wx47\") pod \"kube-proxy-2cp6r\" (UID: \"004ed704-fd2e-44e5-8232-5bba29e49433\") " pod="kube-system/kube-proxy-2cp6r" Jan 30 14:56:32.129210 systemd[1]: Created slice kubepods-besteffort-pod58080993_5978_429b_9a5e_ecf0de7e680e.slice - libcontainer container kubepods-besteffort-pod58080993_5978_429b_9a5e_ecf0de7e680e.slice. Jan 30 14:56:32.232507 kubelet[1911]: E0130 14:56:32.232323 1911 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:56:32.232507 kubelet[1911]: W0130 14:56:32.232355 1911 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:56:32.232507 kubelet[1911]: E0130 14:56:32.232411 1911 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:56:32.232776 kubelet[1911]: E0130 14:56:32.232724 1911 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:56:32.232776 kubelet[1911]: W0130 14:56:32.232741 1911 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:56:32.232776 kubelet[1911]: E0130 14:56:32.232757 1911 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:56:32.240028 kubelet[1911]: E0130 14:56:32.238127 1911 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:56:32.240028 kubelet[1911]: W0130 14:56:32.238152 1911 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:56:32.240028 kubelet[1911]: E0130 14:56:32.238169 1911 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:56:32.259619 kubelet[1911]: E0130 14:56:32.259170 1911 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:56:32.259619 kubelet[1911]: W0130 14:56:32.259197 1911 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:56:32.259619 kubelet[1911]: E0130 14:56:32.259247 1911 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:56:32.259619 kubelet[1911]: E0130 14:56:32.259577 1911 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:56:32.259619 kubelet[1911]: W0130 14:56:32.259601 1911 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:56:32.259619 kubelet[1911]: E0130 14:56:32.259616 1911 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:56:32.261591 kubelet[1911]: E0130 14:56:32.261570 1911 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:56:32.261732 kubelet[1911]: W0130 14:56:32.261708 1911 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:56:32.261854 kubelet[1911]: E0130 14:56:32.261830 1911 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:56:32.428174 containerd[1508]: time="2025-01-30T14:56:32.426494593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2cp6r,Uid:004ed704-fd2e-44e5-8232-5bba29e49433,Namespace:kube-system,Attempt:0,}" Jan 30 14:56:32.433161 containerd[1508]: time="2025-01-30T14:56:32.433032016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gpcp4,Uid:58080993-5978-429b-9a5e-ecf0de7e680e,Namespace:calico-system,Attempt:0,}" Jan 30 14:56:32.514726 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 30 14:56:33.087829 kubelet[1911]: E0130 14:56:33.087715 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:33.233978 kubelet[1911]: E0130 14:56:33.233483 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:33.381783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1507402976.mount: Deactivated successfully. Jan 30 14:56:33.388285 containerd[1508]: time="2025-01-30T14:56:33.388212525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:56:33.389786 containerd[1508]: time="2025-01-30T14:56:33.389745695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:56:33.390859 containerd[1508]: time="2025-01-30T14:56:33.390778492Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 14:56:33.391983 containerd[1508]: time="2025-01-30T14:56:33.391929386Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:56:33.393719 containerd[1508]: time="2025-01-30T14:56:33.393626445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:56:33.397419 containerd[1508]: time="2025-01-30T14:56:33.396540061Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:56:33.400524 containerd[1508]: time="2025-01-30T14:56:33.400483425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 973.700798ms" Jan 30 14:56:33.402972 containerd[1508]: time="2025-01-30T14:56:33.402936403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 969.757837ms" Jan 30 14:56:33.535499 containerd[1508]: time="2025-01-30T14:56:33.533608768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:56:33.536498 containerd[1508]: time="2025-01-30T14:56:33.536265957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:56:33.536498 containerd[1508]: time="2025-01-30T14:56:33.536298341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:33.536498 containerd[1508]: time="2025-01-30T14:56:33.536437712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:33.537298 containerd[1508]: time="2025-01-30T14:56:33.535620228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:56:33.541092 containerd[1508]: time="2025-01-30T14:56:33.538149391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:56:33.541092 containerd[1508]: time="2025-01-30T14:56:33.538176872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:33.541092 containerd[1508]: time="2025-01-30T14:56:33.538285895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:33.632344 systemd[1]: Started cri-containerd-5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c.scope - libcontainer container 5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c. Jan 30 14:56:33.644248 systemd[1]: Started cri-containerd-8ada2fa114ee9f8f465e10c82a835c93c40f694c1053e0a9cb893e0839d83425.scope - libcontainer container 8ada2fa114ee9f8f465e10c82a835c93c40f694c1053e0a9cb893e0839d83425. 
Jan 30 14:56:33.679866 containerd[1508]: time="2025-01-30T14:56:33.679100707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gpcp4,Uid:58080993-5978-429b-9a5e-ecf0de7e680e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c\"" Jan 30 14:56:33.683764 containerd[1508]: time="2025-01-30T14:56:33.682940415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:56:33.695806 containerd[1508]: time="2025-01-30T14:56:33.695748224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2cp6r,Uid:004ed704-fd2e-44e5-8232-5bba29e49433,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ada2fa114ee9f8f465e10c82a835c93c40f694c1053e0a9cb893e0839d83425\"" Jan 30 14:56:34.088173 kubelet[1911]: E0130 14:56:34.088118 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:34.955236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438174915.mount: Deactivated successfully. 
Jan 30 14:56:35.088341 kubelet[1911]: E0130 14:56:35.088272 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:35.092853 containerd[1508]: time="2025-01-30T14:56:35.091830476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:35.094090 containerd[1508]: time="2025-01-30T14:56:35.093968663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 14:56:35.094952 containerd[1508]: time="2025-01-30T14:56:35.094916735Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:35.098988 containerd[1508]: time="2025-01-30T14:56:35.097692806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:35.098988 containerd[1508]: time="2025-01-30T14:56:35.098765814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.415759449s" Jan 30 14:56:35.098988 containerd[1508]: time="2025-01-30T14:56:35.098804947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 14:56:35.100914 containerd[1508]: time="2025-01-30T14:56:35.100882429Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 14:56:35.102320 containerd[1508]: time="2025-01-30T14:56:35.102245436Z" level=info msg="CreateContainer within sandbox \"5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:56:35.146928 containerd[1508]: time="2025-01-30T14:56:35.146760244Z" level=info msg="CreateContainer within sandbox \"5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d\"" Jan 30 14:56:35.148349 containerd[1508]: time="2025-01-30T14:56:35.148219571Z" level=info msg="StartContainer for \"088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d\"" Jan 30 14:56:35.186292 systemd[1]: Started cri-containerd-088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d.scope - libcontainer container 088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d. Jan 30 14:56:35.230263 containerd[1508]: time="2025-01-30T14:56:35.230224629Z" level=info msg="StartContainer for \"088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d\" returns successfully" Jan 30 14:56:35.233534 kubelet[1911]: E0130 14:56:35.233414 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:35.247434 systemd[1]: cri-containerd-088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d.scope: Deactivated successfully. 
Jan 30 14:56:35.361353 containerd[1508]: time="2025-01-30T14:56:35.361221669Z" level=info msg="shim disconnected" id=088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d namespace=k8s.io Jan 30 14:56:35.361728 containerd[1508]: time="2025-01-30T14:56:35.361694452Z" level=warning msg="cleaning up after shim disconnected" id=088e6e09e19b1ae5b1a0ba6088fddcb1ffef8294b35c35c0e1900e0550777e0d namespace=k8s.io Jan 30 14:56:35.361889 containerd[1508]: time="2025-01-30T14:56:35.361863591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:56:36.089240 kubelet[1911]: E0130 14:56:36.089178 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:36.574487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261339047.mount: Deactivated successfully. Jan 30 14:56:37.090272 kubelet[1911]: E0130 14:56:37.090194 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:37.234281 kubelet[1911]: E0130 14:56:37.234200 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:37.310120 containerd[1508]: time="2025-01-30T14:56:37.309536039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:37.310992 containerd[1508]: time="2025-01-30T14:56:37.310922749Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474" Jan 30 14:56:37.312786 containerd[1508]: time="2025-01-30T14:56:37.312711392Z" level=info msg="ImageCreate event 
name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:37.316347 containerd[1508]: time="2025-01-30T14:56:37.316285757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:37.318557 containerd[1508]: time="2025-01-30T14:56:37.318455607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.217253779s" Jan 30 14:56:37.318557 containerd[1508]: time="2025-01-30T14:56:37.318511731Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 14:56:37.320739 containerd[1508]: time="2025-01-30T14:56:37.320463119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:56:37.321745 containerd[1508]: time="2025-01-30T14:56:37.321708243Z" level=info msg="CreateContainer within sandbox \"8ada2fa114ee9f8f465e10c82a835c93c40f694c1053e0a9cb893e0839d83425\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:56:37.356111 containerd[1508]: time="2025-01-30T14:56:37.355092860Z" level=info msg="CreateContainer within sandbox \"8ada2fa114ee9f8f465e10c82a835c93c40f694c1053e0a9cb893e0839d83425\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9067cb453c6defc6eb0cc479637dfe5c356c749e17c8869f2c22c37c4c5f01cc\"" Jan 30 14:56:37.356808 containerd[1508]: time="2025-01-30T14:56:37.356759876Z" level=info msg="StartContainer for 
\"9067cb453c6defc6eb0cc479637dfe5c356c749e17c8869f2c22c37c4c5f01cc\"" Jan 30 14:56:37.397948 systemd[1]: Started cri-containerd-9067cb453c6defc6eb0cc479637dfe5c356c749e17c8869f2c22c37c4c5f01cc.scope - libcontainer container 9067cb453c6defc6eb0cc479637dfe5c356c749e17c8869f2c22c37c4c5f01cc. Jan 30 14:56:37.439921 containerd[1508]: time="2025-01-30T14:56:37.439845617Z" level=info msg="StartContainer for \"9067cb453c6defc6eb0cc479637dfe5c356c749e17c8869f2c22c37c4c5f01cc\" returns successfully" Jan 30 14:56:38.091281 kubelet[1911]: E0130 14:56:38.091185 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:38.306792 kubelet[1911]: I0130 14:56:38.306671 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2cp6r" podStartSLOduration=4.684419081 podStartE2EDuration="8.306628465s" podCreationTimestamp="2025-01-30 14:56:30 +0000 UTC" firstStartedPulling="2025-01-30 14:56:33.697207253 +0000 UTC m=+4.601016630" lastFinishedPulling="2025-01-30 14:56:37.319416639 +0000 UTC m=+8.223226014" observedRunningTime="2025-01-30 14:56:38.305368599 +0000 UTC m=+9.209178000" watchObservedRunningTime="2025-01-30 14:56:38.306628465 +0000 UTC m=+9.210437848" Jan 30 14:56:39.091761 kubelet[1911]: E0130 14:56:39.091673 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:39.236096 kubelet[1911]: E0130 14:56:39.235326 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:40.092456 kubelet[1911]: E0130 14:56:40.092341 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 14:56:41.093778 kubelet[1911]: E0130 14:56:41.093226 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:41.235302 kubelet[1911]: E0130 14:56:41.234501 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:42.093848 kubelet[1911]: E0130 14:56:42.093732 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:42.531049 containerd[1508]: time="2025-01-30T14:56:42.530972284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:42.532444 containerd[1508]: time="2025-01-30T14:56:42.532329510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 14:56:42.533692 containerd[1508]: time="2025-01-30T14:56:42.533272054Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:42.536241 containerd[1508]: time="2025-01-30T14:56:42.536200186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:42.537461 containerd[1508]: time="2025-01-30T14:56:42.537418430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.216907511s" Jan 30 14:56:42.537557 containerd[1508]: time="2025-01-30T14:56:42.537471571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 14:56:42.541101 containerd[1508]: time="2025-01-30T14:56:42.541046869Z" level=info msg="CreateContainer within sandbox \"5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:56:42.559639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258590933.mount: Deactivated successfully. Jan 30 14:56:42.561321 containerd[1508]: time="2025-01-30T14:56:42.560396785Z" level=info msg="CreateContainer within sandbox \"5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b\"" Jan 30 14:56:42.562103 containerd[1508]: time="2025-01-30T14:56:42.561904139Z" level=info msg="StartContainer for \"cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b\"" Jan 30 14:56:42.611431 systemd[1]: Started cri-containerd-cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b.scope - libcontainer container cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b. 
Jan 30 14:56:42.658094 containerd[1508]: time="2025-01-30T14:56:42.655603033Z" level=info msg="StartContainer for \"cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b\" returns successfully" Jan 30 14:56:43.093957 kubelet[1911]: E0130 14:56:43.093871 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:43.235767 kubelet[1911]: E0130 14:56:43.234487 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:43.494613 containerd[1508]: time="2025-01-30T14:56:43.494534656Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:56:43.497139 systemd[1]: cri-containerd-cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b.scope: Deactivated successfully. Jan 30 14:56:43.507961 kubelet[1911]: I0130 14:56:43.506872 1911 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 14:56:43.552973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b-rootfs.mount: Deactivated successfully. 
Jan 30 14:56:43.736968 containerd[1508]: time="2025-01-30T14:56:43.736868093Z" level=info msg="shim disconnected" id=cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b namespace=k8s.io Jan 30 14:56:43.736968 containerd[1508]: time="2025-01-30T14:56:43.736966408Z" level=warning msg="cleaning up after shim disconnected" id=cf1de1f5b2a48f1fce0dac2284ef213ece3c9599ad497a02886ca3d9adb65a8b namespace=k8s.io Jan 30 14:56:43.737612 containerd[1508]: time="2025-01-30T14:56:43.736984194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:56:43.753149 containerd[1508]: time="2025-01-30T14:56:43.752358602Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:56:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:56:44.095512 kubelet[1911]: E0130 14:56:44.095132 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:44.298148 containerd[1508]: time="2025-01-30T14:56:44.297778366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:56:45.095477 kubelet[1911]: E0130 14:56:45.095357 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:45.243959 systemd[1]: Created slice kubepods-besteffort-pod11876523_7753_443e_b7b7_8d73fa03192e.slice - libcontainer container kubepods-besteffort-pod11876523_7753_443e_b7b7_8d73fa03192e.slice. 
Jan 30 14:56:45.246984 containerd[1508]: time="2025-01-30T14:56:45.246940461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:0,}" Jan 30 14:56:45.338092 containerd[1508]: time="2025-01-30T14:56:45.337306319Z" level=error msg="Failed to destroy network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:45.339827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5-shm.mount: Deactivated successfully. Jan 30 14:56:45.340456 containerd[1508]: time="2025-01-30T14:56:45.340280114Z" level=error msg="encountered an error cleaning up failed sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:45.340456 containerd[1508]: time="2025-01-30T14:56:45.340394197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:45.341180 kubelet[1911]: E0130 14:56:45.340956 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:45.341180 kubelet[1911]: E0130 14:56:45.341058 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:45.341180 kubelet[1911]: E0130 14:56:45.341121 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:45.342659 kubelet[1911]: E0130 14:56:45.342292 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" 
podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:46.009191 update_engine[1486]: I20250130 14:56:46.007586 1486 update_attempter.cc:509] Updating boot flags... Jan 30 14:56:46.080290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2401) Jan 30 14:56:46.095960 kubelet[1911]: E0130 14:56:46.095867 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:46.191247 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2402) Jan 30 14:56:46.307338 kubelet[1911]: I0130 14:56:46.307150 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5" Jan 30 14:56:46.310125 containerd[1508]: time="2025-01-30T14:56:46.309691520Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:46.310125 containerd[1508]: time="2025-01-30T14:56:46.309984915Z" level=info msg="Ensure that sandbox bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5 in task-service has been cleanup successfully" Jan 30 14:56:46.314142 containerd[1508]: time="2025-01-30T14:56:46.313158772Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:46.314142 containerd[1508]: time="2025-01-30T14:56:46.313184356Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:46.313734 systemd[1]: run-netns-cni\x2d6dafb52d\x2d7a26\x2d8ed8\x2d7870\x2db879d5b6136c.mount: Deactivated successfully. 
Jan 30 14:56:46.319318 containerd[1508]: time="2025-01-30T14:56:46.317483974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:1,}" Jan 30 14:56:46.436933 containerd[1508]: time="2025-01-30T14:56:46.436317241Z" level=error msg="Failed to destroy network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:46.438086 containerd[1508]: time="2025-01-30T14:56:46.438027914Z" level=error msg="encountered an error cleaning up failed sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:46.439198 containerd[1508]: time="2025-01-30T14:56:46.439159452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:46.439828 kubelet[1911]: E0130 14:56:46.439593 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:46.439828 kubelet[1911]: E0130 14:56:46.439725 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:46.439828 kubelet[1911]: E0130 14:56:46.439771 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:46.440014 kubelet[1911]: E0130 14:56:46.439889 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:46.440895 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0-shm.mount: Deactivated successfully. Jan 30 14:56:47.097104 kubelet[1911]: E0130 14:56:47.096976 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:47.323966 kubelet[1911]: I0130 14:56:47.321914 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0" Jan 30 14:56:47.324287 containerd[1508]: time="2025-01-30T14:56:47.323828332Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:56:47.327746 containerd[1508]: time="2025-01-30T14:56:47.324817282Z" level=info msg="Ensure that sandbox d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0 in task-service has been cleanup successfully" Jan 30 14:56:47.327746 containerd[1508]: time="2025-01-30T14:56:47.325281074Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:56:47.327746 containerd[1508]: time="2025-01-30T14:56:47.325305246Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:56:47.328945 containerd[1508]: time="2025-01-30T14:56:47.328602739Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:47.328945 containerd[1508]: time="2025-01-30T14:56:47.328723856Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:47.328945 containerd[1508]: time="2025-01-30T14:56:47.328743750Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:47.329195 
systemd[1]: run-netns-cni\x2dc0b4b860\x2d4dfb\x2d584a\x2d5268\x2de8f7f2e93546.mount: Deactivated successfully. Jan 30 14:56:47.330378 containerd[1508]: time="2025-01-30T14:56:47.329794553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:2,}" Jan 30 14:56:47.444134 containerd[1508]: time="2025-01-30T14:56:47.443715765Z" level=error msg="Failed to destroy network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:47.446510 containerd[1508]: time="2025-01-30T14:56:47.446453829Z" level=error msg="encountered an error cleaning up failed sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:47.446614 containerd[1508]: time="2025-01-30T14:56:47.446578678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:47.447191 kubelet[1911]: E0130 14:56:47.446965 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:47.447191 kubelet[1911]: E0130 14:56:47.447086 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:47.447191 kubelet[1911]: E0130 14:56:47.447150 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:47.447722 kubelet[1911]: E0130 14:56:47.447439 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:47.449429 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba-shm.mount: Deactivated successfully. Jan 30 14:56:48.098239 kubelet[1911]: E0130 14:56:48.098138 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:48.326765 kubelet[1911]: I0130 14:56:48.326663 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba" Jan 30 14:56:48.328688 containerd[1508]: time="2025-01-30T14:56:48.328181794Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:56:48.328688 containerd[1508]: time="2025-01-30T14:56:48.328462709Z" level=info msg="Ensure that sandbox 9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba in task-service has been cleanup successfully" Jan 30 14:56:48.330101 containerd[1508]: time="2025-01-30T14:56:48.329395856Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully" Jan 30 14:56:48.330101 containerd[1508]: time="2025-01-30T14:56:48.329446924Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully" Jan 30 14:56:48.330101 containerd[1508]: time="2025-01-30T14:56:48.329783273Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:56:48.330101 containerd[1508]: time="2025-01-30T14:56:48.329884578Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:56:48.330101 containerd[1508]: time="2025-01-30T14:56:48.329902638Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:56:48.331016 
containerd[1508]: time="2025-01-30T14:56:48.330531723Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:48.331016 containerd[1508]: time="2025-01-30T14:56:48.330631012Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:48.331016 containerd[1508]: time="2025-01-30T14:56:48.330649165Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:48.331574 containerd[1508]: time="2025-01-30T14:56:48.331405310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:3,}" Jan 30 14:56:48.332185 systemd[1]: run-netns-cni\x2dc14d2c10\x2dfcd0\x2da3db\x2d7f84\x2dc5fb147ddd68.mount: Deactivated successfully. Jan 30 14:56:48.507312 containerd[1508]: time="2025-01-30T14:56:48.507236233Z" level=error msg="Failed to destroy network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:48.512569 containerd[1508]: time="2025-01-30T14:56:48.510588988Z" level=error msg="encountered an error cleaning up failed sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:48.512569 containerd[1508]: time="2025-01-30T14:56:48.510692594Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:48.511477 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d-shm.mount: Deactivated successfully. Jan 30 14:56:48.512890 kubelet[1911]: E0130 14:56:48.511161 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:48.512890 kubelet[1911]: E0130 14:56:48.511276 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:48.512890 kubelet[1911]: E0130 14:56:48.511327 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:48.513038 kubelet[1911]: E0130 14:56:48.511398 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:49.095849 systemd[1]: Created slice kubepods-besteffort-pod69380d9b_ecad_417e_b5ed_aa8051a80de1.slice - libcontainer container kubepods-besteffort-pod69380d9b_ecad_417e_b5ed_aa8051a80de1.slice. 
Jan 30 14:56:49.099246 kubelet[1911]: E0130 14:56:49.099212 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:49.135880 kubelet[1911]: I0130 14:56:49.135833 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wjl\" (UniqueName: \"kubernetes.io/projected/69380d9b-ecad-417e-b5ed-aa8051a80de1-kube-api-access-r8wjl\") pod \"nginx-deployment-7fcdb87857-4jcjz\" (UID: \"69380d9b-ecad-417e-b5ed-aa8051a80de1\") " pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:49.342152 kubelet[1911]: I0130 14:56:49.341280 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d" Jan 30 14:56:49.342381 containerd[1508]: time="2025-01-30T14:56:49.342337621Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" Jan 30 14:56:49.345210 containerd[1508]: time="2025-01-30T14:56:49.342604422Z" level=info msg="Ensure that sandbox 51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d in task-service has been cleanup successfully" Jan 30 14:56:49.345210 containerd[1508]: time="2025-01-30T14:56:49.342801408Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully" Jan 30 14:56:49.345210 containerd[1508]: time="2025-01-30T14:56:49.342822460Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully" Jan 30 14:56:49.344938 systemd[1]: run-netns-cni\x2d06278ff5\x2d70bd\x2d4529\x2de90e\x2d29454caa803c.mount: Deactivated successfully. 
Jan 30 14:56:49.347553 containerd[1508]: time="2025-01-30T14:56:49.347027088Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:56:49.347553 containerd[1508]: time="2025-01-30T14:56:49.347162713Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully" Jan 30 14:56:49.347553 containerd[1508]: time="2025-01-30T14:56:49.347182815Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully" Jan 30 14:56:49.348289 containerd[1508]: time="2025-01-30T14:56:49.347782457Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:56:49.348289 containerd[1508]: time="2025-01-30T14:56:49.347894814Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:56:49.348289 containerd[1508]: time="2025-01-30T14:56:49.347913725Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:56:49.349207 containerd[1508]: time="2025-01-30T14:56:49.348473883Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:49.349207 containerd[1508]: time="2025-01-30T14:56:49.348626503Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:49.349207 containerd[1508]: time="2025-01-30T14:56:49.348647210Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:49.349626 containerd[1508]: time="2025-01-30T14:56:49.349385465Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:4,}" Jan 30 14:56:49.402825 containerd[1508]: time="2025-01-30T14:56:49.402717214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:0,}" Jan 30 14:56:49.534780 containerd[1508]: time="2025-01-30T14:56:49.534685417Z" level=error msg="Failed to destroy network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.535288 containerd[1508]: time="2025-01-30T14:56:49.535146736Z" level=error msg="encountered an error cleaning up failed sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.535288 containerd[1508]: time="2025-01-30T14:56:49.535226953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.535576 kubelet[1911]: E0130 14:56:49.535515 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.535655 kubelet[1911]: E0130 14:56:49.535610 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:49.535655 kubelet[1911]: E0130 14:56:49.535643 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:49.536087 kubelet[1911]: E0130 14:56:49.535713 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" 
podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:49.564005 containerd[1508]: time="2025-01-30T14:56:49.563842594Z" level=error msg="Failed to destroy network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.564560 containerd[1508]: time="2025-01-30T14:56:49.564358703Z" level=error msg="encountered an error cleaning up failed sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.564560 containerd[1508]: time="2025-01-30T14:56:49.564437305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.565486 kubelet[1911]: E0130 14:56:49.564749 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:49.565486 kubelet[1911]: E0130 14:56:49.565236 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:49.565486 kubelet[1911]: E0130 14:56:49.565421 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:49.565971 kubelet[1911]: E0130 14:56:49.565818 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4jcjz" podUID="69380d9b-ecad-417e-b5ed-aa8051a80de1" Jan 30 14:56:50.080558 kubelet[1911]: E0130 14:56:50.080450 1911 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:50.100376 kubelet[1911]: E0130 14:56:50.100286 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:50.337421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1-shm.mount: Deactivated successfully. Jan 30 14:56:50.337838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27-shm.mount: Deactivated successfully. Jan 30 14:56:50.349943 kubelet[1911]: I0130 14:56:50.349657 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27" Jan 30 14:56:50.351089 containerd[1508]: time="2025-01-30T14:56:50.351031004Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" Jan 30 14:56:50.352426 kubelet[1911]: I0130 14:56:50.352100 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1" Jan 30 14:56:50.353024 containerd[1508]: time="2025-01-30T14:56:50.352993044Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" Jan 30 14:56:50.353232 containerd[1508]: time="2025-01-30T14:56:50.353172500Z" level=info msg="Ensure that sandbox 0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27 in task-service has been cleanup successfully" Jan 30 14:56:50.353441 containerd[1508]: time="2025-01-30T14:56:50.353302658Z" level=info msg="Ensure that sandbox e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1 in task-service has been cleanup successfully" Jan 30 14:56:50.356797 containerd[1508]: time="2025-01-30T14:56:50.356692933Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully" Jan 30 14:56:50.356797 containerd[1508]: time="2025-01-30T14:56:50.356757807Z" level=info msg="StopPodSandbox for 
\"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully" Jan 30 14:56:50.356812 systemd[1]: run-netns-cni\x2d83136544\x2d82de\x2dd98e\x2ddf82\x2d0a98083812b3.mount: Deactivated successfully. Jan 30 14:56:50.359789 containerd[1508]: time="2025-01-30T14:56:50.359167961Z" level=info msg="TearDown network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully" Jan 30 14:56:50.359789 containerd[1508]: time="2025-01-30T14:56:50.359212732Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully" Jan 30 14:56:50.360718 systemd[1]: run-netns-cni\x2d232c75b4\x2dc424\x2d7adc\x2d93f6\x2d8a97a3ae6a63.mount: Deactivated successfully. Jan 30 14:56:50.362794 containerd[1508]: time="2025-01-30T14:56:50.362541048Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" Jan 30 14:56:50.364785 containerd[1508]: time="2025-01-30T14:56:50.364119290Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully" Jan 30 14:56:50.364785 containerd[1508]: time="2025-01-30T14:56:50.364146257Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully" Jan 30 14:56:50.364785 containerd[1508]: time="2025-01-30T14:56:50.364348841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:1,}" Jan 30 14:56:50.365389 containerd[1508]: time="2025-01-30T14:56:50.365354155Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:56:50.365567 containerd[1508]: time="2025-01-30T14:56:50.365523008Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" 
successfully" Jan 30 14:56:50.365567 containerd[1508]: time="2025-01-30T14:56:50.365561324Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully" Jan 30 14:56:50.366175 containerd[1508]: time="2025-01-30T14:56:50.366145207Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:56:50.366396 containerd[1508]: time="2025-01-30T14:56:50.366369476Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:56:50.366561 containerd[1508]: time="2025-01-30T14:56:50.366522268Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:56:50.367247 containerd[1508]: time="2025-01-30T14:56:50.367162473Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:50.367481 containerd[1508]: time="2025-01-30T14:56:50.367453729Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:50.367821 containerd[1508]: time="2025-01-30T14:56:50.367794318Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:50.369266 containerd[1508]: time="2025-01-30T14:56:50.369143057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:5,}" Jan 30 14:56:50.635247 containerd[1508]: time="2025-01-30T14:56:50.633219135Z" level=error msg="Failed to destroy network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 14:56:50.635247 containerd[1508]: time="2025-01-30T14:56:50.633793102Z" level=error msg="encountered an error cleaning up failed sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:50.635247 containerd[1508]: time="2025-01-30T14:56:50.633900107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:50.636266 kubelet[1911]: E0130 14:56:50.634230 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:50.636266 kubelet[1911]: E0130 14:56:50.634320 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:50.636266 
kubelet[1911]: E0130 14:56:50.634363 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:50.636435 kubelet[1911]: E0130 14:56:50.634429 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4jcjz" podUID="69380d9b-ecad-417e-b5ed-aa8051a80de1" Jan 30 14:56:50.658083 containerd[1508]: time="2025-01-30T14:56:50.658010094Z" level=error msg="Failed to destroy network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:50.658942 containerd[1508]: time="2025-01-30T14:56:50.658905800Z" level=error msg="encountered an error cleaning up failed sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:50.659565 containerd[1508]: time="2025-01-30T14:56:50.659514995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:50.660006 kubelet[1911]: E0130 14:56:50.659961 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:50.660932 kubelet[1911]: E0130 14:56:50.660459 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:50.660932 kubelet[1911]: E0130 14:56:50.660529 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:50.660932 kubelet[1911]: E0130 14:56:50.660631 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:51.100957 kubelet[1911]: E0130 14:56:51.100833 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:51.337232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159-shm.mount: Deactivated successfully. Jan 30 14:56:51.337896 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618-shm.mount: Deactivated successfully. 
Jan 30 14:56:51.359666 kubelet[1911]: I0130 14:56:51.358693 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159" Jan 30 14:56:51.359817 containerd[1508]: time="2025-01-30T14:56:51.359389130Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\"" Jan 30 14:56:51.360768 containerd[1508]: time="2025-01-30T14:56:51.360389354Z" level=info msg="Ensure that sandbox 2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159 in task-service has been cleanup successfully" Jan 30 14:56:51.360939 containerd[1508]: time="2025-01-30T14:56:51.360910813Z" level=info msg="TearDown network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" successfully" Jan 30 14:56:51.361066 containerd[1508]: time="2025-01-30T14:56:51.361041263Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" returns successfully" Jan 30 14:56:51.361661 containerd[1508]: time="2025-01-30T14:56:51.361631472Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" Jan 30 14:56:51.361851 containerd[1508]: time="2025-01-30T14:56:51.361824395Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully" Jan 30 14:56:51.362093 containerd[1508]: time="2025-01-30T14:56:51.361959044Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully" Jan 30 14:56:51.365516 systemd[1]: run-netns-cni\x2db3a9b458\x2dc26a\x2d886f\x2d65a1\x2d26865f04faad.mount: Deactivated successfully. 
Jan 30 14:56:51.366013 containerd[1508]: time="2025-01-30T14:56:51.365980873Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" Jan 30 14:56:51.366493 containerd[1508]: time="2025-01-30T14:56:51.366463272Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully" Jan 30 14:56:51.366617 containerd[1508]: time="2025-01-30T14:56:51.366591256Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully" Jan 30 14:56:51.367726 containerd[1508]: time="2025-01-30T14:56:51.367528518Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:56:51.367726 containerd[1508]: time="2025-01-30T14:56:51.367648356Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully" Jan 30 14:56:51.367726 containerd[1508]: time="2025-01-30T14:56:51.367667264Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully" Jan 30 14:56:51.369108 kubelet[1911]: I0130 14:56:51.368410 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618" Jan 30 14:56:51.370266 containerd[1508]: time="2025-01-30T14:56:51.370235285Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:56:51.370465 containerd[1508]: time="2025-01-30T14:56:51.370438527Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:56:51.370596 containerd[1508]: time="2025-01-30T14:56:51.370570160Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns 
successfully" Jan 30 14:56:51.371051 containerd[1508]: time="2025-01-30T14:56:51.371015072Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:51.371213 containerd[1508]: time="2025-01-30T14:56:51.371147278Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:51.371213 containerd[1508]: time="2025-01-30T14:56:51.371167384Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:51.371314 containerd[1508]: time="2025-01-30T14:56:51.371225683Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\"" Jan 30 14:56:51.371706 containerd[1508]: time="2025-01-30T14:56:51.371419803Z" level=info msg="Ensure that sandbox 41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618 in task-service has been cleanup successfully" Jan 30 14:56:51.372225 containerd[1508]: time="2025-01-30T14:56:51.372179737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:6,}" Jan 30 14:56:51.376009 containerd[1508]: time="2025-01-30T14:56:51.375910329Z" level=info msg="TearDown network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" successfully" Jan 30 14:56:51.376009 containerd[1508]: time="2025-01-30T14:56:51.375938970Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" returns successfully" Jan 30 14:56:51.377111 containerd[1508]: time="2025-01-30T14:56:51.376589479Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" Jan 30 14:56:51.377111 containerd[1508]: time="2025-01-30T14:56:51.376699191Z" level=info msg="TearDown network for sandbox 
\"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully" Jan 30 14:56:51.377111 containerd[1508]: time="2025-01-30T14:56:51.376718120Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully" Jan 30 14:56:51.377490 containerd[1508]: time="2025-01-30T14:56:51.377427006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:2,}" Jan 30 14:56:51.378366 systemd[1]: run-netns-cni\x2d2ed4405c\x2dbe66\x2df4a4\x2d8f3b\x2d4260f0e563d4.mount: Deactivated successfully. Jan 30 14:56:51.535426 containerd[1508]: time="2025-01-30T14:56:51.535294919Z" level=error msg="Failed to destroy network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.536082 containerd[1508]: time="2025-01-30T14:56:51.535962351Z" level=error msg="encountered an error cleaning up failed sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.536519 containerd[1508]: time="2025-01-30T14:56:51.536039032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.537275 kubelet[1911]: E0130 14:56:51.537142 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.537572 kubelet[1911]: E0130 14:56:51.537245 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:51.537572 kubelet[1911]: E0130 14:56:51.537453 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:51.538422 kubelet[1911]: E0130 14:56:51.538020 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4jcjz" podUID="69380d9b-ecad-417e-b5ed-aa8051a80de1" Jan 30 14:56:51.571162 containerd[1508]: time="2025-01-30T14:56:51.570992030Z" level=error msg="Failed to destroy network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.572247 containerd[1508]: time="2025-01-30T14:56:51.572036133Z" level=error msg="encountered an error cleaning up failed sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.573116 containerd[1508]: time="2025-01-30T14:56:51.572484424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.573202 kubelet[1911]: E0130 14:56:51.572759 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:51.573202 kubelet[1911]: E0130 14:56:51.572835 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:51.573202 kubelet[1911]: E0130 14:56:51.572878 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:51.573398 kubelet[1911]: E0130 14:56:51.572931 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:52.101999 kubelet[1911]: E0130 
14:56:52.101638 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:52.338627 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28-shm.mount: Deactivated successfully. Jan 30 14:56:52.379972 kubelet[1911]: I0130 14:56:52.379092 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28" Jan 30 14:56:52.380774 containerd[1508]: time="2025-01-30T14:56:52.380735094Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\"" Jan 30 14:56:52.382874 containerd[1508]: time="2025-01-30T14:56:52.382704781Z" level=info msg="Ensure that sandbox 5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28 in task-service has been cleanup successfully" Jan 30 14:56:52.385369 containerd[1508]: time="2025-01-30T14:56:52.385333300Z" level=info msg="TearDown network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" successfully" Jan 30 14:56:52.385736 containerd[1508]: time="2025-01-30T14:56:52.385478621Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" returns successfully" Jan 30 14:56:52.387095 containerd[1508]: time="2025-01-30T14:56:52.386103618Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\"" Jan 30 14:56:52.387095 containerd[1508]: time="2025-01-30T14:56:52.386219329Z" level=info msg="TearDown network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" successfully" Jan 30 14:56:52.387095 containerd[1508]: time="2025-01-30T14:56:52.386245733Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" returns successfully" Jan 30 14:56:52.386910 systemd[1]: 
run-netns-cni\x2d273f9962\x2ddba1\x2deb9b\x2d981f\x2db28605e6120b.mount: Deactivated successfully. Jan 30 14:56:52.388505 containerd[1508]: time="2025-01-30T14:56:52.387645622Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" Jan 30 14:56:52.388505 containerd[1508]: time="2025-01-30T14:56:52.387760444Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully" Jan 30 14:56:52.388505 containerd[1508]: time="2025-01-30T14:56:52.387780584Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully" Jan 30 14:56:52.388708 kubelet[1911]: I0130 14:56:52.388339 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b" Jan 30 14:56:52.390113 containerd[1508]: time="2025-01-30T14:56:52.389799626Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\"" Jan 30 14:56:52.390385 containerd[1508]: time="2025-01-30T14:56:52.390355378Z" level=info msg="Ensure that sandbox 18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b in task-service has been cleanup successfully" Jan 30 14:56:52.392570 systemd[1]: run-netns-cni\x2dcf1e3489\x2dfd50\x2d9675\x2dbca1\x2d77372308f7aa.mount: Deactivated successfully. 
Jan 30 14:56:52.393203 containerd[1508]: time="2025-01-30T14:56:52.392967289Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\""
Jan 30 14:56:52.393777 containerd[1508]: time="2025-01-30T14:56:52.393167572Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully"
Jan 30 14:56:52.393777 containerd[1508]: time="2025-01-30T14:56:52.393650059Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully"
Jan 30 14:56:52.393777 containerd[1508]: time="2025-01-30T14:56:52.393375752Z" level=info msg="TearDown network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" successfully"
Jan 30 14:56:52.393777 containerd[1508]: time="2025-01-30T14:56:52.393741421Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" returns successfully"
Jan 30 14:56:52.395701 containerd[1508]: time="2025-01-30T14:56:52.394951127Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\""
Jan 30 14:56:52.395701 containerd[1508]: time="2025-01-30T14:56:52.395174161Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully"
Jan 30 14:56:52.395701 containerd[1508]: time="2025-01-30T14:56:52.395198092Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully"
Jan 30 14:56:52.395701 containerd[1508]: time="2025-01-30T14:56:52.395461030Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\""
Jan 30 14:56:52.395917 containerd[1508]: time="2025-01-30T14:56:52.395802282Z" level=info msg="TearDown network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" successfully"
Jan 30 14:56:52.395917 containerd[1508]: time="2025-01-30T14:56:52.395832570Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" returns successfully"
Jan 30 14:56:52.397091 containerd[1508]: time="2025-01-30T14:56:52.396461659Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\""
Jan 30 14:56:52.397091 containerd[1508]: time="2025-01-30T14:56:52.396660852Z" level=info msg="TearDown network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully"
Jan 30 14:56:52.397091 containerd[1508]: time="2025-01-30T14:56:52.396682261Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully"
Jan 30 14:56:52.397091 containerd[1508]: time="2025-01-30T14:56:52.396767104Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\""
Jan 30 14:56:52.397091 containerd[1508]: time="2025-01-30T14:56:52.396923979Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully"
Jan 30 14:56:52.397091 containerd[1508]: time="2025-01-30T14:56:52.396950380Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully"
Jan 30 14:56:52.399093 containerd[1508]: time="2025-01-30T14:56:52.398917793Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\""
Jan 30 14:56:52.399620 containerd[1508]: time="2025-01-30T14:56:52.399053927Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully"
Jan 30 14:56:52.400099 containerd[1508]: time="2025-01-30T14:56:52.399887975Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully"
Jan 30 14:56:52.400099 containerd[1508]: time="2025-01-30T14:56:52.399611375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:3,}"
Jan 30 14:56:52.409718 containerd[1508]: time="2025-01-30T14:56:52.409675380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:7,}"
Jan 30 14:56:52.539164 containerd[1508]: time="2025-01-30T14:56:52.539011346Z" level=error msg="Failed to destroy network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.540061 containerd[1508]: time="2025-01-30T14:56:52.539839205Z" level=error msg="encountered an error cleaning up failed sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.540061 containerd[1508]: time="2025-01-30T14:56:52.539937779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.540809 kubelet[1911]: E0130 14:56:52.540506 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.540809 kubelet[1911]: E0130 14:56:52.540595 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz"
Jan 30 14:56:52.540809 kubelet[1911]: E0130 14:56:52.540628 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz"
Jan 30 14:56:52.541004 kubelet[1911]: E0130 14:56:52.540702 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4jcjz" podUID="69380d9b-ecad-417e-b5ed-aa8051a80de1"
Jan 30 14:56:52.577590 containerd[1508]: time="2025-01-30T14:56:52.577484735Z" level=error msg="Failed to destroy network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.578929 containerd[1508]: time="2025-01-30T14:56:52.578681963Z" level=error msg="encountered an error cleaning up failed sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.578929 containerd[1508]: time="2025-01-30T14:56:52.578768882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.579817 kubelet[1911]: E0130 14:56:52.579257 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:52.579817 kubelet[1911]: E0130 14:56:52.579335 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s"
Jan 30 14:56:52.579817 kubelet[1911]: E0130 14:56:52.579366 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s"
Jan 30 14:56:52.580022 kubelet[1911]: E0130 14:56:52.579427 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e"
Jan 30 14:56:53.102546 kubelet[1911]: E0130 14:56:53.102443 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:56:53.336936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b-shm.mount: Deactivated successfully.
Jan 30 14:56:53.397641 kubelet[1911]: I0130 14:56:53.396697 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0"
Jan 30 14:56:53.398493 containerd[1508]: time="2025-01-30T14:56:53.398429959Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\""
Jan 30 14:56:53.399501 containerd[1508]: time="2025-01-30T14:56:53.399285608Z" level=info msg="Ensure that sandbox f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0 in task-service has been cleanup successfully"
Jan 30 14:56:53.402671 containerd[1508]: time="2025-01-30T14:56:53.402640698Z" level=info msg="TearDown network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" successfully"
Jan 30 14:56:53.402792 systemd[1]: run-netns-cni\x2dd3bbbd5b\x2dae31\x2df0b7\x2d2ced\x2d7c32abf1ea85.mount: Deactivated successfully.
Jan 30 14:56:53.404308 containerd[1508]: time="2025-01-30T14:56:53.402784307Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" returns successfully"
Jan 30 14:56:53.404374 containerd[1508]: time="2025-01-30T14:56:53.404349032Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\""
Jan 30 14:56:53.405117 containerd[1508]: time="2025-01-30T14:56:53.404449722Z" level=info msg="TearDown network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" successfully"
Jan 30 14:56:53.405117 containerd[1508]: time="2025-01-30T14:56:53.404476557Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" returns successfully"
Jan 30 14:56:53.405117 containerd[1508]: time="2025-01-30T14:56:53.404915451Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\""
Jan 30 14:56:53.405117 containerd[1508]: time="2025-01-30T14:56:53.405021216Z" level=info msg="TearDown network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" successfully"
Jan 30 14:56:53.405117 containerd[1508]: time="2025-01-30T14:56:53.405038176Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" returns successfully"
Jan 30 14:56:53.406438 containerd[1508]: time="2025-01-30T14:56:53.405974410Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\""
Jan 30 14:56:53.406438 containerd[1508]: time="2025-01-30T14:56:53.406124711Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully"
Jan 30 14:56:53.406438 containerd[1508]: time="2025-01-30T14:56:53.406143537Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully"
Jan 30 14:56:53.406849 containerd[1508]: time="2025-01-30T14:56:53.406765994Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\""
Jan 30 14:56:53.406933 containerd[1508]: time="2025-01-30T14:56:53.406877428Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully"
Jan 30 14:56:53.406933 containerd[1508]: time="2025-01-30T14:56:53.406895322Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully"
Jan 30 14:56:53.409269 containerd[1508]: time="2025-01-30T14:56:53.407432578Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\""
Jan 30 14:56:53.409269 containerd[1508]: time="2025-01-30T14:56:53.407530677Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully"
Jan 30 14:56:53.409269 containerd[1508]: time="2025-01-30T14:56:53.407549555Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully"
Jan 30 14:56:53.409269 containerd[1508]: time="2025-01-30T14:56:53.408500609Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\""
Jan 30 14:56:53.409269 containerd[1508]: time="2025-01-30T14:56:53.408739361Z" level=info msg="Ensure that sandbox 882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b in task-service has been cleanup successfully"
Jan 30 14:56:53.409472 kubelet[1911]: I0130 14:56:53.407748 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b"
Jan 30 14:56:53.409770 containerd[1508]: time="2025-01-30T14:56:53.409735017Z" level=info msg="TearDown network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" successfully"
Jan 30 14:56:53.409770 containerd[1508]: time="2025-01-30T14:56:53.409764135Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" returns successfully"
Jan 30 14:56:53.409909 containerd[1508]: time="2025-01-30T14:56:53.409851175Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\""
Jan 30 14:56:53.409967 containerd[1508]: time="2025-01-30T14:56:53.409953682Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully"
Jan 30 14:56:53.410042 containerd[1508]: time="2025-01-30T14:56:53.409970930Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully"
Jan 30 14:56:53.411307 containerd[1508]: time="2025-01-30T14:56:53.411272706Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\""
Jan 30 14:56:53.411403 containerd[1508]: time="2025-01-30T14:56:53.411377163Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully"
Jan 30 14:56:53.411459 containerd[1508]: time="2025-01-30T14:56:53.411401265Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully"
Jan 30 14:56:53.411533 containerd[1508]: time="2025-01-30T14:56:53.411470426Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\""
Jan 30 14:56:53.411599 containerd[1508]: time="2025-01-30T14:56:53.411558247Z" level=info msg="TearDown network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" successfully"
Jan 30 14:56:53.411650 containerd[1508]: time="2025-01-30T14:56:53.411600761Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" returns successfully"
Jan 30 14:56:53.414915 containerd[1508]: time="2025-01-30T14:56:53.412936689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:8,}"
Jan 30 14:56:53.411989 systemd[1]: run-netns-cni\x2dfc5cb619\x2ddb89\x2d328b\x2dd7e4\x2d69d895780f5a.mount: Deactivated successfully.
Jan 30 14:56:53.415556 containerd[1508]: time="2025-01-30T14:56:53.415523229Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\""
Jan 30 14:56:53.415772 containerd[1508]: time="2025-01-30T14:56:53.415745499Z" level=info msg="TearDown network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" successfully"
Jan 30 14:56:53.415886 containerd[1508]: time="2025-01-30T14:56:53.415863418Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" returns successfully"
Jan 30 14:56:53.417488 containerd[1508]: time="2025-01-30T14:56:53.417457933Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\""
Jan 30 14:56:53.417881 containerd[1508]: time="2025-01-30T14:56:53.417853965Z" level=info msg="TearDown network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully"
Jan 30 14:56:53.418086 containerd[1508]: time="2025-01-30T14:56:53.417994397Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully"
Jan 30 14:56:53.419187 containerd[1508]: time="2025-01-30T14:56:53.418853908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:4,}"
Jan 30 14:56:53.565557 containerd[1508]: time="2025-01-30T14:56:53.565496668Z" level=error msg="Failed to destroy network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.566897 containerd[1508]: time="2025-01-30T14:56:53.566096578Z" level=error msg="encountered an error cleaning up failed sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.566897 containerd[1508]: time="2025-01-30T14:56:53.566188382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.567114 kubelet[1911]: E0130 14:56:53.566428 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.567114 kubelet[1911]: E0130 14:56:53.566501 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz"
Jan 30 14:56:53.567114 kubelet[1911]: E0130 14:56:53.566534 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz"
Jan 30 14:56:53.567280 kubelet[1911]: E0130 14:56:53.566602 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4jcjz" podUID="69380d9b-ecad-417e-b5ed-aa8051a80de1"
Jan 30 14:56:53.579860 containerd[1508]: time="2025-01-30T14:56:53.579048616Z" level=error msg="Failed to destroy network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.579860 containerd[1508]: time="2025-01-30T14:56:53.579468179Z" level=error msg="encountered an error cleaning up failed sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.579860 containerd[1508]: time="2025-01-30T14:56:53.579533221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.580648 kubelet[1911]: E0130 14:56:53.580204 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:56:53.580648 kubelet[1911]: E0130 14:56:53.580270 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s"
Jan 30 14:56:53.580648 kubelet[1911]: E0130 14:56:53.580302 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s"
Jan 30 14:56:53.580822 kubelet[1911]: E0130 14:56:53.580362 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e"
Jan 30 14:56:54.103025 kubelet[1911]: E0130 14:56:54.102946 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:56:54.291469 containerd[1508]: time="2025-01-30T14:56:54.290397493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:56:54.332376 containerd[1508]: time="2025-01-30T14:56:54.332214307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 30 14:56:54.333779 containerd[1508]: time="2025-01-30T14:56:54.333741443Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:56:54.336734 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca-shm.mount: Deactivated successfully.
Jan 30 14:56:54.336932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033534490.mount: Deactivated successfully.
Jan 30 14:56:54.337991 containerd[1508]: time="2025-01-30T14:56:54.337548881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:56:54.340393 containerd[1508]: time="2025-01-30T14:56:54.340226630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.042391996s"
Jan 30 14:56:54.340393 containerd[1508]: time="2025-01-30T14:56:54.340268918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 30 14:56:54.375925 containerd[1508]: time="2025-01-30T14:56:54.375671957Z" level=info msg="CreateContainer within sandbox \"5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 30 14:56:54.395402 containerd[1508]: time="2025-01-30T14:56:54.395352585Z" level=info msg="CreateContainer within sandbox \"5cfdb41e5de9d8e7549bf564659a953f43c6e83b14d3711c9331be6c11cbe01c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dc01abecd907a50a1e89981562253ee5a40ca9ff32d0c11cedabae586c858ba1\""
Jan 30 14:56:54.396475 containerd[1508]: time="2025-01-30T14:56:54.396286917Z" level=info msg="StartContainer for \"dc01abecd907a50a1e89981562253ee5a40ca9ff32d0c11cedabae586c858ba1\""
Jan 30 14:56:54.421104 kubelet[1911]: I0130 14:56:54.420542 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca"
Jan 30 14:56:54.421611 containerd[1508]: time="2025-01-30T14:56:54.421552761Z" level=info msg="StopPodSandbox for \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\""
Jan 30 14:56:54.423687 containerd[1508]: time="2025-01-30T14:56:54.423028320Z" level=info msg="Ensure that sandbox 5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca in task-service has been cleanup successfully"
Jan 30 14:56:54.427133 containerd[1508]: time="2025-01-30T14:56:54.427017313Z" level=info msg="TearDown network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" successfully"
Jan 30 14:56:54.427133 containerd[1508]: time="2025-01-30T14:56:54.427044493Z" level=info msg="StopPodSandbox for \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" returns successfully"
Jan 30 14:56:54.428080 systemd[1]: run-netns-cni\x2d2f50fc49\x2dc570\x2dc6a9\x2de51a\x2dc6fc5976b032.mount: Deactivated successfully.
Jan 30 14:56:54.431503 containerd[1508]: time="2025-01-30T14:56:54.430620520Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\""
Jan 30 14:56:54.431503 containerd[1508]: time="2025-01-30T14:56:54.430741549Z" level=info msg="TearDown network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" successfully"
Jan 30 14:56:54.431503 containerd[1508]: time="2025-01-30T14:56:54.430761145Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" returns successfully"
Jan 30 14:56:54.431835 containerd[1508]: time="2025-01-30T14:56:54.431518071Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\""
Jan 30 14:56:54.431835 containerd[1508]: time="2025-01-30T14:56:54.431664491Z" level=info msg="TearDown network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" successfully"
Jan 30 14:56:54.431835 containerd[1508]: time="2025-01-30T14:56:54.431684475Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" returns successfully"
Jan 30 14:56:54.432942 kubelet[1911]: I0130 14:56:54.432294 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d"
Jan 30 14:56:54.433192 containerd[1508]: time="2025-01-30T14:56:54.433160221Z" level=info msg="StopPodSandbox for \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\""
Jan 30 14:56:54.433801 containerd[1508]: time="2025-01-30T14:56:54.433770577Z" level=info msg="Ensure that sandbox e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d in task-service has been cleanup successfully"
Jan 30 14:56:54.434215 containerd[1508]: time="2025-01-30T14:56:54.433514128Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\""
Jan 30 14:56:54.434296 containerd[1508]: time="2025-01-30T14:56:54.434210707Z" level=info msg="TearDown network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" successfully"
Jan 30 14:56:54.434296 containerd[1508]: time="2025-01-30T14:56:54.434232041Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" returns successfully"
Jan 30 14:56:54.435012 containerd[1508]: time="2025-01-30T14:56:54.434647976Z" level=info msg="TearDown network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" successfully"
Jan 30 14:56:54.435431 containerd[1508]: time="2025-01-30T14:56:54.435322173Z" level=info msg="StopPodSandbox for \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" returns successfully"
Jan 30 14:56:54.435600 containerd[1508]: time="2025-01-30T14:56:54.434978289Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\""
Jan 30 14:56:54.435783 containerd[1508]: time="2025-01-30T14:56:54.435738565Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully"
Jan 30 14:56:54.435957 containerd[1508]: time="2025-01-30T14:56:54.435859415Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully"
Jan 30 14:56:54.437025 containerd[1508]: time="2025-01-30T14:56:54.436647869Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\""
Jan 30 14:56:54.437025 containerd[1508]: time="2025-01-30T14:56:54.436749754Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully"
Jan 30 14:56:54.437025 containerd[1508]: time="2025-01-30T14:56:54.436768377Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully"
Jan 30 14:56:54.437025 containerd[1508]: time="2025-01-30T14:56:54.436852318Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\""
Jan 30 14:56:54.437025 containerd[1508]: time="2025-01-30T14:56:54.436945216Z" level=info msg="TearDown network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" successfully"
Jan 30 14:56:54.437025 containerd[1508]: time="2025-01-30T14:56:54.436962547Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" returns successfully"
Jan 30 14:56:54.437743 containerd[1508]: time="2025-01-30T14:56:54.437715522Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\""
Jan 30 14:56:54.437993 containerd[1508]: time="2025-01-30T14:56:54.437967849Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully"
Jan 30 14:56:54.438307 containerd[1508]: time="2025-01-30T14:56:54.438103997Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully"
Jan 30 14:56:54.438487 containerd[1508]: time="2025-01-30T14:56:54.438153075Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\""
Jan 30 14:56:54.438811 containerd[1508]: time="2025-01-30T14:56:54.438680834Z" level=info msg="TearDown network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" successfully"
Jan 30 14:56:54.439880 containerd[1508]: time="2025-01-30T14:56:54.439560904Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\""
Jan 30 14:56:54.440105 containerd[1508]: time="2025-01-30T14:56:54.440025628Z" level=info msg="TearDown network for sandbox
\"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:56:54.440310 containerd[1508]: time="2025-01-30T14:56:54.440242006Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:56:54.440421 containerd[1508]: time="2025-01-30T14:56:54.440059954Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" returns successfully" Jan 30 14:56:54.441419 containerd[1508]: time="2025-01-30T14:56:54.440913117Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:54.441419 containerd[1508]: time="2025-01-30T14:56:54.441023014Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:54.441419 containerd[1508]: time="2025-01-30T14:56:54.441040967Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:54.441962 containerd[1508]: time="2025-01-30T14:56:54.441909278Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\"" Jan 30 14:56:54.442033 containerd[1508]: time="2025-01-30T14:56:54.442017963Z" level=info msg="TearDown network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" successfully" Jan 30 14:56:54.442275 containerd[1508]: time="2025-01-30T14:56:54.442037030Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" returns successfully" Jan 30 14:56:54.442427 containerd[1508]: time="2025-01-30T14:56:54.442341700Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" Jan 30 14:56:54.442494 containerd[1508]: time="2025-01-30T14:56:54.442443667Z" level=info msg="TearDown network for sandbox 
\"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully" Jan 30 14:56:54.442494 containerd[1508]: time="2025-01-30T14:56:54.442462395Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully" Jan 30 14:56:54.443211 containerd[1508]: time="2025-01-30T14:56:54.442958523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:9,}" Jan 30 14:56:54.443730 containerd[1508]: time="2025-01-30T14:56:54.443566318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:5,}" Jan 30 14:56:54.541309 systemd[1]: Started cri-containerd-dc01abecd907a50a1e89981562253ee5a40ca9ff32d0c11cedabae586c858ba1.scope - libcontainer container dc01abecd907a50a1e89981562253ee5a40ca9ff32d0c11cedabae586c858ba1. Jan 30 14:56:54.580730 containerd[1508]: time="2025-01-30T14:56:54.580672027Z" level=error msg="Failed to destroy network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.581461 containerd[1508]: time="2025-01-30T14:56:54.581322021Z" level=error msg="encountered an error cleaning up failed sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.581461 containerd[1508]: time="2025-01-30T14:56:54.581399877Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.582129 kubelet[1911]: E0130 14:56:54.581971 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.582229 kubelet[1911]: E0130 14:56:54.582051 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:54.582315 kubelet[1911]: E0130 14:56:54.582231 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqq8s" Jan 30 14:56:54.582410 kubelet[1911]: E0130 14:56:54.582302 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqq8s_calico-system(11876523-7753-443e-b7b7-8d73fa03192e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqq8s" podUID="11876523-7753-443e-b7b7-8d73fa03192e" Jan 30 14:56:54.607251 containerd[1508]: time="2025-01-30T14:56:54.607038849Z" level=error msg="Failed to destroy network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.607846 containerd[1508]: time="2025-01-30T14:56:54.607700102Z" level=error msg="encountered an error cleaning up failed sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.607846 containerd[1508]: time="2025-01-30T14:56:54.607784907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.609397 kubelet[1911]: E0130 14:56:54.608830 1911 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:56:54.609397 kubelet[1911]: E0130 14:56:54.608929 1911 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:54.609397 kubelet[1911]: E0130 14:56:54.608981 1911 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4jcjz" Jan 30 14:56:54.610174 kubelet[1911]: E0130 14:56:54.610101 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-4jcjz_default(69380d9b-ecad-417e-b5ed-aa8051a80de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4jcjz" podUID="69380d9b-ecad-417e-b5ed-aa8051a80de1" Jan 30 14:56:54.615299 containerd[1508]: time="2025-01-30T14:56:54.615221403Z" level=info msg="StartContainer for \"dc01abecd907a50a1e89981562253ee5a40ca9ff32d0c11cedabae586c858ba1\" returns successfully" Jan 30 14:56:54.710222 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:56:54.710893 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 14:56:55.104206 kubelet[1911]: E0130 14:56:55.104139 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:55.338527 systemd[1]: run-netns-cni\x2d98dd893d\x2d2b6b\x2dfe67\x2dd840\x2d5d8094a304e7.mount: Deactivated successfully. Jan 30 14:56:55.458660 kubelet[1911]: I0130 14:56:55.457369 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53" Jan 30 14:56:55.459130 containerd[1508]: time="2025-01-30T14:56:55.458323105Z" level=info msg="StopPodSandbox for \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\"" Jan 30 14:56:55.459130 containerd[1508]: time="2025-01-30T14:56:55.458642763Z" level=info msg="Ensure that sandbox bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53 in task-service has been cleanup successfully" Jan 30 14:56:55.463413 systemd[1]: run-netns-cni\x2ded3be61a\x2dc06a\x2dd8f8\x2d7da2\x2d9a1bc60d08bf.mount: Deactivated successfully. 
Jan 30 14:56:55.465632 containerd[1508]: time="2025-01-30T14:56:55.465587738Z" level=info msg="TearDown network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\" successfully" Jan 30 14:56:55.465849 containerd[1508]: time="2025-01-30T14:56:55.465816734Z" level=info msg="StopPodSandbox for \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\" returns successfully" Jan 30 14:56:55.467265 containerd[1508]: time="2025-01-30T14:56:55.467143531Z" level=info msg="StopPodSandbox for \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\"" Jan 30 14:56:55.467881 kubelet[1911]: I0130 14:56:55.467713 1911 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e" Jan 30 14:56:55.468124 containerd[1508]: time="2025-01-30T14:56:55.467810521Z" level=info msg="TearDown network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" successfully" Jan 30 14:56:55.468124 containerd[1508]: time="2025-01-30T14:56:55.467938729Z" level=info msg="StopPodSandbox for \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" returns successfully" Jan 30 14:56:55.468854 containerd[1508]: time="2025-01-30T14:56:55.468821606Z" level=info msg="StopPodSandbox for \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\"" Jan 30 14:56:55.469150 containerd[1508]: time="2025-01-30T14:56:55.469054307Z" level=info msg="Ensure that sandbox da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e in task-service has been cleanup successfully" Jan 30 14:56:55.469505 containerd[1508]: time="2025-01-30T14:56:55.469439176Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\"" Jan 30 14:56:55.471933 containerd[1508]: time="2025-01-30T14:56:55.469682020Z" level=info msg="TearDown network for sandbox 
\"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" successfully" Jan 30 14:56:55.471933 containerd[1508]: time="2025-01-30T14:56:55.469765515Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" returns successfully" Jan 30 14:56:55.471933 containerd[1508]: time="2025-01-30T14:56:55.469882367Z" level=info msg="TearDown network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\" successfully" Jan 30 14:56:55.471933 containerd[1508]: time="2025-01-30T14:56:55.469902416Z" level=info msg="StopPodSandbox for \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\" returns successfully" Jan 30 14:56:55.473457 systemd[1]: run-netns-cni\x2d1451449a\x2dd873\x2d26b8\x2dc440\x2db9a314766eda.mount: Deactivated successfully. Jan 30 14:56:55.474479 containerd[1508]: time="2025-01-30T14:56:55.474436960Z" level=info msg="StopPodSandbox for \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\"" Jan 30 14:56:55.474607 containerd[1508]: time="2025-01-30T14:56:55.474550880Z" level=info msg="TearDown network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" successfully" Jan 30 14:56:55.474607 containerd[1508]: time="2025-01-30T14:56:55.474586520Z" level=info msg="StopPodSandbox for \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" returns successfully" Jan 30 14:56:55.475174 containerd[1508]: time="2025-01-30T14:56:55.474658262Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\"" Jan 30 14:56:55.475174 containerd[1508]: time="2025-01-30T14:56:55.474749262Z" level=info msg="TearDown network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" successfully" Jan 30 14:56:55.475174 containerd[1508]: time="2025-01-30T14:56:55.474766876Z" level=info msg="StopPodSandbox for 
\"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" returns successfully" Jan 30 14:56:55.475928 containerd[1508]: time="2025-01-30T14:56:55.475747620Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\"" Jan 30 14:56:55.475928 containerd[1508]: time="2025-01-30T14:56:55.475856438Z" level=info msg="TearDown network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" successfully" Jan 30 14:56:55.475928 containerd[1508]: time="2025-01-30T14:56:55.475876314Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" returns successfully" Jan 30 14:56:55.476356 containerd[1508]: time="2025-01-30T14:56:55.475944158Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\"" Jan 30 14:56:55.476356 containerd[1508]: time="2025-01-30T14:56:55.476039206Z" level=info msg="TearDown network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" successfully" Jan 30 14:56:55.476356 containerd[1508]: time="2025-01-30T14:56:55.476056805Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" returns successfully" Jan 30 14:56:55.477266 containerd[1508]: time="2025-01-30T14:56:55.477215573Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\"" Jan 30 14:56:55.477436 containerd[1508]: time="2025-01-30T14:56:55.477322250Z" level=info msg="TearDown network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" successfully" Jan 30 14:56:55.477436 containerd[1508]: time="2025-01-30T14:56:55.477341560Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" returns successfully" Jan 30 14:56:55.477436 containerd[1508]: time="2025-01-30T14:56:55.477411280Z" level=info msg="StopPodSandbox for 
\"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" Jan 30 14:56:55.477824 containerd[1508]: time="2025-01-30T14:56:55.477506118Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully" Jan 30 14:56:55.477824 containerd[1508]: time="2025-01-30T14:56:55.477525584Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully" Jan 30 14:56:55.478333 containerd[1508]: time="2025-01-30T14:56:55.478272420Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" Jan 30 14:56:55.478509 containerd[1508]: time="2025-01-30T14:56:55.478369814Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully" Jan 30 14:56:55.478509 containerd[1508]: time="2025-01-30T14:56:55.478387364Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully" Jan 30 14:56:55.478657 containerd[1508]: time="2025-01-30T14:56:55.478504648Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\"" Jan 30 14:56:55.478657 containerd[1508]: time="2025-01-30T14:56:55.478612661Z" level=info msg="TearDown network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" successfully" Jan 30 14:56:55.478657 containerd[1508]: time="2025-01-30T14:56:55.478630202Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" returns successfully" Jan 30 14:56:55.479374 containerd[1508]: time="2025-01-30T14:56:55.479324246Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" Jan 30 14:56:55.479477 containerd[1508]: time="2025-01-30T14:56:55.479431927Z" level=info msg="TearDown network for sandbox 
\"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully" Jan 30 14:56:55.479477 containerd[1508]: time="2025-01-30T14:56:55.479450392Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully" Jan 30 14:56:55.479616 containerd[1508]: time="2025-01-30T14:56:55.479572064Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:56:55.479890 containerd[1508]: time="2025-01-30T14:56:55.479671929Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully" Jan 30 14:56:55.479890 containerd[1508]: time="2025-01-30T14:56:55.479695356Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully" Jan 30 14:56:55.482152 containerd[1508]: time="2025-01-30T14:56:55.482047639Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:56:55.482358 containerd[1508]: time="2025-01-30T14:56:55.482199657Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:56:55.482358 containerd[1508]: time="2025-01-30T14:56:55.482220739Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:56:55.483263 containerd[1508]: time="2025-01-30T14:56:55.483195239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:6,}" Jan 30 14:56:55.486316 containerd[1508]: time="2025-01-30T14:56:55.486206464Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:56:55.486410 containerd[1508]: time="2025-01-30T14:56:55.486320375Z" level=info 
msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:56:55.486410 containerd[1508]: time="2025-01-30T14:56:55.486340091Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:56:55.487646 containerd[1508]: time="2025-01-30T14:56:55.487417542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:10,}" Jan 30 14:56:55.777979 systemd-networkd[1432]: cali5aeefc1bc19: Link UP Jan 30 14:56:55.778391 systemd-networkd[1432]: cali5aeefc1bc19: Gained carrier Jan 30 14:56:55.797170 kubelet[1911]: I0130 14:56:55.795125 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gpcp4" podStartSLOduration=5.13497996 podStartE2EDuration="25.794607712s" podCreationTimestamp="2025-01-30 14:56:30 +0000 UTC" firstStartedPulling="2025-01-30 14:56:33.682356569 +0000 UTC m=+4.586165944" lastFinishedPulling="2025-01-30 14:56:54.341984319 +0000 UTC m=+25.245793696" observedRunningTime="2025-01-30 14:56:55.474151937 +0000 UTC m=+26.377961328" watchObservedRunningTime="2025-01-30 14:56:55.794607712 +0000 UTC m=+26.698417095" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.584 [INFO][2962] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.648 [INFO][2962] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.11.234-k8s-csi--node--driver--gqq8s-eth0 csi-node-driver- calico-system 11876523-7753-443e-b7b7-8d73fa03192e 1186 0 2025-01-30 14:56:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.244.11.234 csi-node-driver-gqq8s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5aeefc1bc19 [] []}} ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.648 [INFO][2962] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.706 [INFO][2986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" HandleID="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Workload="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.720 [INFO][2986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" HandleID="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Workload="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"10.244.11.234", "pod":"csi-node-driver-gqq8s", "timestamp":"2025-01-30 14:56:55.706004452 +0000 UTC"}, Hostname:"10.244.11.234", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 
14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.720 [INFO][2986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.720 [INFO][2986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.720 [INFO][2986] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.11.234' Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.724 [INFO][2986] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.730 [INFO][2986] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.736 [INFO][2986] ipam/ipam.go 489: Trying affinity for 192.168.32.0/26 host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.739 [INFO][2986] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.742 [INFO][2986] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.742 [INFO][2986] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.744 [INFO][2986] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.750 [INFO][2986] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 
handle="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.760 [INFO][2986] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.1/26] block=192.168.32.0/26 handle="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.760 [INFO][2986] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.1/26] handle="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" host="10.244.11.234" Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.760 [INFO][2986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:56:55.797388 containerd[1508]: 2025-01-30 14:56:55.760 [INFO][2986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.1/26] IPv6=[] ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" HandleID="k8s-pod-network.fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Workload="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" Jan 30 14:56:55.798265 containerd[1508]: 2025-01-30 14:56:55.765 [INFO][2962] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-csi--node--driver--gqq8s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11876523-7753-443e-b7b7-8d73fa03192e", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.11.234", ContainerID:"", Pod:"csi-node-driver-gqq8s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5aeefc1bc19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:56:55.798265 containerd[1508]: 2025-01-30 14:56:55.765 [INFO][2962] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.1/32] ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" Jan 30 14:56:55.798265 containerd[1508]: 2025-01-30 14:56:55.765 [INFO][2962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5aeefc1bc19 ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" Jan 30 14:56:55.798265 containerd[1508]: 2025-01-30 14:56:55.778 [INFO][2962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" 
WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" Jan 30 14:56:55.798265 containerd[1508]: 2025-01-30 14:56:55.779 [INFO][2962] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-csi--node--driver--gqq8s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11876523-7753-443e-b7b7-8d73fa03192e", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.11.234", ContainerID:"fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d", Pod:"csi-node-driver-gqq8s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5aeefc1bc19", MAC:"b6:97:ba:73:cc:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:56:55.798265 
containerd[1508]: 2025-01-30 14:56:55.793 [INFO][2962] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d" Namespace="calico-system" Pod="csi-node-driver-gqq8s" WorkloadEndpoint="10.244.11.234-k8s-csi--node--driver--gqq8s-eth0" Jan 30 14:56:55.838269 containerd[1508]: time="2025-01-30T14:56:55.838086779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:56:55.838454 containerd[1508]: time="2025-01-30T14:56:55.838332874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:56:55.838761 containerd[1508]: time="2025-01-30T14:56:55.838502531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:55.839985 containerd[1508]: time="2025-01-30T14:56:55.839632116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:55.868429 systemd[1]: Started cri-containerd-fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d.scope - libcontainer container fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d. 
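The IPAM sequence logged above (acquire the host-wide lock, confirm the host's affinity for the 192.168.32.0/26 block, then claim the first free address and write the block back) can be sketched roughly as follows. This is an illustrative simplification, not Calico's actual `ipam.go` API; the function and parameter names are hypothetical.

```python
import ipaddress
from threading import Lock

# Hedged sketch of the flow in ipam_plugin.go / ipam.go above:
# take a host-wide lock, then hand out the first unallocated
# address from the host's affine /26 block.
_host_wide_lock = Lock()  # "About to acquire host-wide IPAM lock"

def auto_assign(block_cidr: str, allocated: set, handle: str) -> str:
    """Assign one IPv4 address from an affine block (illustrative only)."""
    with _host_wide_lock:  # "Acquired host-wide IPAM lock"
        block = ipaddress.ip_network(block_cidr)
        for ip in block.hosts():  # skips network/broadcast addresses
            addr = str(ip)
            if addr not in allocated:
                # "Writing block in order to claim IPs"
                allocated.add(addr)
                return f"{addr}/{block.prefixlen}"
    raise RuntimeError(f"block {block_cidr} exhausted for handle {handle}")
```

Under this reading, the first claim from 192.168.32.0/26 yields 192.168.32.1/26 (the csi-node-driver pod above) and the next yields 192.168.32.2/26 (the nginx pod below), consistent with the "Successfully claimed IPs" entries.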
Jan 30 14:56:55.907836 containerd[1508]: time="2025-01-30T14:56:55.907786449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqq8s,Uid:11876523-7753-443e-b7b7-8d73fa03192e,Namespace:calico-system,Attempt:10,} returns sandbox id \"fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d\"" Jan 30 14:56:55.911429 containerd[1508]: time="2025-01-30T14:56:55.911394778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:56:55.920148 systemd-networkd[1432]: cali776eecb1b16: Link UP Jan 30 14:56:55.920472 systemd-networkd[1432]: cali776eecb1b16: Gained carrier Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.582 [INFO][2957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.648 [INFO][2957] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0 nginx-deployment-7fcdb87857- default 69380d9b-ecad-417e-b5ed-aa8051a80de1 1276 0 2025-01-30 14:56:49 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.244.11.234 nginx-deployment-7fcdb87857-4jcjz eth0 default [] [] [kns.default ksa.default.default] cali776eecb1b16 [] []}} ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.648 [INFO][2957] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" Jan 30 
14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.707 [INFO][2985] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" HandleID="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Workload="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.726 [INFO][2985] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" HandleID="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Workload="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318af0), Attrs:map[string]string{"namespace":"default", "node":"10.244.11.234", "pod":"nginx-deployment-7fcdb87857-4jcjz", "timestamp":"2025-01-30 14:56:55.707163502 +0000 UTC"}, Hostname:"10.244.11.234", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.726 [INFO][2985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.761 [INFO][2985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.761 [INFO][2985] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.11.234' Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.831 [INFO][2985] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.858 [INFO][2985] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.869 [INFO][2985] ipam/ipam.go 489: Trying affinity for 192.168.32.0/26 host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.874 [INFO][2985] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.879 [INFO][2985] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.879 [INFO][2985] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.883 [INFO][2985] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.890 [INFO][2985] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.912 [INFO][2985] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.2/26] block=192.168.32.0/26 
handle="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.912 [INFO][2985] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.2/26] handle="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" host="10.244.11.234" Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.912 [INFO][2985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:56:55.932891 containerd[1508]: 2025-01-30 14:56:55.912 [INFO][2985] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.2/26] IPv6=[] ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" HandleID="k8s-pod-network.c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Workload="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" Jan 30 14:56:55.934024 containerd[1508]: 2025-01-30 14:56:55.915 [INFO][2957] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"69380d9b-ecad-417e-b5ed-aa8051a80de1", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.11.234", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-4jcjz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali776eecb1b16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:56:55.934024 containerd[1508]: 2025-01-30 14:56:55.915 [INFO][2957] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.2/32] ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" Jan 30 14:56:55.934024 containerd[1508]: 2025-01-30 14:56:55.916 [INFO][2957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali776eecb1b16 ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" Jan 30 14:56:55.934024 containerd[1508]: 2025-01-30 14:56:55.919 [INFO][2957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" Jan 30 14:56:55.934024 containerd[1508]: 2025-01-30 14:56:55.920 [INFO][2957] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" 
WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"69380d9b-ecad-417e-b5ed-aa8051a80de1", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.11.234", ContainerID:"c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf", Pod:"nginx-deployment-7fcdb87857-4jcjz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali776eecb1b16", MAC:"d2:e3:c6:c0:b0:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:56:55.934024 containerd[1508]: 2025-01-30 14:56:55.931 [INFO][2957] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-4jcjz" WorkloadEndpoint="10.244.11.234-k8s-nginx--deployment--7fcdb87857--4jcjz-eth0" Jan 30 14:56:55.964835 containerd[1508]: time="2025-01-30T14:56:55.964683923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:56:55.965015 containerd[1508]: time="2025-01-30T14:56:55.964861586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:56:55.965015 containerd[1508]: time="2025-01-30T14:56:55.964939506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:55.965274 containerd[1508]: time="2025-01-30T14:56:55.965143717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:56:55.987288 systemd[1]: Started cri-containerd-c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf.scope - libcontainer container c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf. Jan 30 14:56:56.043920 containerd[1508]: time="2025-01-30T14:56:56.043613907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4jcjz,Uid:69380d9b-ecad-417e-b5ed-aa8051a80de1,Namespace:default,Attempt:6,} returns sandbox id \"c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf\"" Jan 30 14:56:56.106119 kubelet[1911]: E0130 14:56:56.104865 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:56.438157 kernel: bpftool[3217]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:56:56.822838 systemd-networkd[1432]: vxlan.calico: Link UP Jan 30 14:56:56.822852 systemd-networkd[1432]: vxlan.calico: Gained carrier Jan 30 14:56:57.105613 kubelet[1911]: E0130 14:56:57.105157 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:57.385816 containerd[1508]: time="2025-01-30T14:56:57.385646816Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:57.387576 containerd[1508]: time="2025-01-30T14:56:57.387534686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 14:56:57.387873 containerd[1508]: time="2025-01-30T14:56:57.387816280Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:57.390523 containerd[1508]: time="2025-01-30T14:56:57.390476954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:56:57.391988 containerd[1508]: time="2025-01-30T14:56:57.391657159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.480217361s" Jan 30 14:56:57.391988 containerd[1508]: time="2025-01-30T14:56:57.391704959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 14:56:57.393367 containerd[1508]: time="2025-01-30T14:56:57.393046771Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 14:56:57.394662 containerd[1508]: time="2025-01-30T14:56:57.394628637Z" level=info msg="CreateContainer within sandbox \"fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:56:57.416986 containerd[1508]: time="2025-01-30T14:56:57.416935822Z" level=info 
msg="CreateContainer within sandbox \"fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8809b3747575b733638fa9bb13eedcde09665db81e25c99063389cf622c1fcf9\"" Jan 30 14:56:57.418171 containerd[1508]: time="2025-01-30T14:56:57.418048621Z" level=info msg="StartContainer for \"8809b3747575b733638fa9bb13eedcde09665db81e25c99063389cf622c1fcf9\"" Jan 30 14:56:57.422226 systemd-networkd[1432]: cali5aeefc1bc19: Gained IPv6LL Jan 30 14:56:57.459144 systemd[1]: run-containerd-runc-k8s.io-8809b3747575b733638fa9bb13eedcde09665db81e25c99063389cf622c1fcf9-runc.KoB3KJ.mount: Deactivated successfully. Jan 30 14:56:57.473333 systemd[1]: Started cri-containerd-8809b3747575b733638fa9bb13eedcde09665db81e25c99063389cf622c1fcf9.scope - libcontainer container 8809b3747575b733638fa9bb13eedcde09665db81e25c99063389cf622c1fcf9. Jan 30 14:56:57.485262 systemd-networkd[1432]: cali776eecb1b16: Gained IPv6LL Jan 30 14:56:57.575959 containerd[1508]: time="2025-01-30T14:56:57.575714013Z" level=info msg="StartContainer for \"8809b3747575b733638fa9bb13eedcde09665db81e25c99063389cf622c1fcf9\" returns successfully" Jan 30 14:56:58.061687 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL Jan 30 14:56:58.105819 kubelet[1911]: E0130 14:56:58.105723 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:56:59.106961 kubelet[1911]: E0130 14:56:59.106849 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:00.107389 kubelet[1911]: E0130 14:57:00.107314 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:00.954257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548929608.mount: Deactivated successfully. 
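The "Gained IPv6LL" entries above record each cali* veth acquiring an IPv6 link-local address once it has carrier. One common derivation (assuming classic modified EUI-64 generation rather than the stable-privacy mode some configurations use) builds the address from the interface MAC, e.g. the b6:97:ba:73:cc:18 endpoint MAC logged earlier:

```python
import ipaddress

def eui64_link_local(mac: str) -> str:
    """Derive an EUI-64 IPv6 link-local address from a MAC.
    Hedged: interfaces configured for stable-privacy address
    generation will get a different, opaque suffix instead."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    suffix = int.from_bytes(bytes(eui), "big")     # 64-bit interface identifier
    return str(ipaddress.IPv6Address((0xFE80 << 112) | suffix))
```

For b6:97:ba:73:cc:18 this gives fe80::b497:baff:fe73:cc18; whether the kernel actually used EUI-64 here is not visible in the log.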
Jan 30 14:57:01.107924 kubelet[1911]: E0130 14:57:01.107843 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:02.108459 kubelet[1911]: E0130 14:57:02.108316 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:02.843125 containerd[1508]: time="2025-01-30T14:57:02.842593498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:02.844794 containerd[1508]: time="2025-01-30T14:57:02.844598527Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 14:57:02.847238 containerd[1508]: time="2025-01-30T14:57:02.845524559Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:02.850055 containerd[1508]: time="2025-01-30T14:57:02.849993880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:02.851516 containerd[1508]: time="2025-01-30T14:57:02.851342131Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.458235093s" Jan 30 14:57:02.851516 containerd[1508]: time="2025-01-30T14:57:02.851387701Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 14:57:02.853606 containerd[1508]: 
time="2025-01-30T14:57:02.853548022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:57:02.862992 containerd[1508]: time="2025-01-30T14:57:02.862806116Z" level=info msg="CreateContainer within sandbox \"c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 14:57:02.892101 containerd[1508]: time="2025-01-30T14:57:02.891934567Z" level=info msg="CreateContainer within sandbox \"c8a232f045e741961b4d05977b56d56f09652eaec1f34005dc32942cecb2a6bf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6984b5ca0444f030e4eb06d41741079993f80d624a37075c8ea1d11943de911f\"" Jan 30 14:57:02.895669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393301747.mount: Deactivated successfully. Jan 30 14:57:02.898717 containerd[1508]: time="2025-01-30T14:57:02.898642843Z" level=info msg="StartContainer for \"6984b5ca0444f030e4eb06d41741079993f80d624a37075c8ea1d11943de911f\"" Jan 30 14:57:02.940309 systemd[1]: Started cri-containerd-6984b5ca0444f030e4eb06d41741079993f80d624a37075c8ea1d11943de911f.scope - libcontainer container 6984b5ca0444f030e4eb06d41741079993f80d624a37075c8ea1d11943de911f. 
Jan 30 14:57:02.985316 containerd[1508]: time="2025-01-30T14:57:02.985260191Z" level=info msg="StartContainer for \"6984b5ca0444f030e4eb06d41741079993f80d624a37075c8ea1d11943de911f\" returns successfully" Jan 30 14:57:03.108874 kubelet[1911]: E0130 14:57:03.108649 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:04.109122 kubelet[1911]: E0130 14:57:04.108915 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:04.375841 containerd[1508]: time="2025-01-30T14:57:04.375535038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:04.377161 containerd[1508]: time="2025-01-30T14:57:04.376673772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 14:57:04.378624 containerd[1508]: time="2025-01-30T14:57:04.378202996Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:04.382240 containerd[1508]: time="2025-01-30T14:57:04.381844768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:04.382953 containerd[1508]: time="2025-01-30T14:57:04.382914144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.529312678s" Jan 30 14:57:04.383027 containerd[1508]: time="2025-01-30T14:57:04.382957606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 14:57:04.387223 containerd[1508]: time="2025-01-30T14:57:04.387181594Z" level=info msg="CreateContainer within sandbox \"fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:57:04.406699 containerd[1508]: time="2025-01-30T14:57:04.405764679Z" level=info msg="CreateContainer within sandbox \"fec9d4acbb4ea8b6a582758042aa79f9a82488bbc9bb4b4f3c9fa7219617aa9d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cdd92649726752449b52f1e8fafb445e77e00e49b22210c90eb84f5963633425\"" Jan 30 14:57:04.406699 containerd[1508]: time="2025-01-30T14:57:04.406639617Z" level=info msg="StartContainer for \"cdd92649726752449b52f1e8fafb445e77e00e49b22210c90eb84f5963633425\"" Jan 30 14:57:04.446197 systemd[1]: run-containerd-runc-k8s.io-cdd92649726752449b52f1e8fafb445e77e00e49b22210c90eb84f5963633425-runc.N6IjqT.mount: Deactivated successfully. Jan 30 14:57:04.460321 systemd[1]: Started cri-containerd-cdd92649726752449b52f1e8fafb445e77e00e49b22210c90eb84f5963633425.scope - libcontainer container cdd92649726752449b52f1e8fafb445e77e00e49b22210c90eb84f5963633425. 
Jan 30 14:57:04.509239 containerd[1508]: time="2025-01-30T14:57:04.509188368Z" level=info msg="StartContainer for \"cdd92649726752449b52f1e8fafb445e77e00e49b22210c90eb84f5963633425\" returns successfully" Jan 30 14:57:04.573846 kubelet[1911]: I0130 14:57:04.573758 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gqq8s" podStartSLOduration=26.099327775 podStartE2EDuration="34.573735277s" podCreationTimestamp="2025-01-30 14:56:30 +0000 UTC" firstStartedPulling="2025-01-30 14:56:55.910858353 +0000 UTC m=+26.814667724" lastFinishedPulling="2025-01-30 14:57:04.385265856 +0000 UTC m=+35.289075226" observedRunningTime="2025-01-30 14:57:04.568003015 +0000 UTC m=+35.471812409" watchObservedRunningTime="2025-01-30 14:57:04.573735277 +0000 UTC m=+35.477544654" Jan 30 14:57:04.574165 kubelet[1911]: I0130 14:57:04.573990 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-4jcjz" podStartSLOduration=8.768267298 podStartE2EDuration="15.573980668s" podCreationTimestamp="2025-01-30 14:56:49 +0000 UTC" firstStartedPulling="2025-01-30 14:56:56.046913386 +0000 UTC m=+26.950722763" lastFinishedPulling="2025-01-30 14:57:02.852626757 +0000 UTC m=+33.756436133" observedRunningTime="2025-01-30 14:57:03.551806558 +0000 UTC m=+34.455615948" watchObservedRunningTime="2025-01-30 14:57:04.573980668 +0000 UTC m=+35.477790052" Jan 30 14:57:05.109735 kubelet[1911]: E0130 14:57:05.109661 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:05.238405 kubelet[1911]: I0130 14:57:05.238231 1911 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:57:05.238405 kubelet[1911]: I0130 14:57:05.238298 1911 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: 
csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:57:06.109900 kubelet[1911]: E0130 14:57:06.109847 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:07.110866 kubelet[1911]: E0130 14:57:07.110769 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:08.111658 kubelet[1911]: E0130 14:57:08.111577 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:09.112467 kubelet[1911]: E0130 14:57:09.112388 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:09.990426 systemd[1]: Created slice kubepods-besteffort-pod911430aa_d3ab_4876_ab42_a5339f3a120b.slice - libcontainer container kubepods-besteffort-pod911430aa_d3ab_4876_ab42_a5339f3a120b.slice. 
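The "Observed pod startup duration" entries just above report two numbers per pod: podStartE2EDuration (creation to observed running) and a smaller podStartSLOduration. A simplified reading of kubelet's pod_startup_latency_tracker is that the SLO figure excludes image-pull time; the helper name below is illustrative, not kubelet's API.

```python
from decimal import Decimal

def slo_duration(e2e: Decimal, pull_start: Decimal, pull_end: Decimal) -> Decimal:
    """Hedged sketch: startup SLO duration = end-to-end startup time
    minus the time spent pulling images (timestamps in seconds)."""
    return e2e - (pull_end - pull_start)
```

Plugging in the csi-node-driver-gqq8s monotonic offsets from the log (firstStartedPulling m=+26.814667724, lastFinishedPulling m=+35.289075226, E2E 34.573735277s) reproduces the logged podStartSLOduration of 26.099327775s.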
Jan 30 14:57:10.076034 kubelet[1911]: I0130 14:57:10.075900 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/911430aa-d3ab-4876-ab42-a5339f3a120b-data\") pod \"nfs-server-provisioner-0\" (UID: \"911430aa-d3ab-4876-ab42-a5339f3a120b\") " pod="default/nfs-server-provisioner-0" Jan 30 14:57:10.076034 kubelet[1911]: I0130 14:57:10.075984 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmmf\" (UniqueName: \"kubernetes.io/projected/911430aa-d3ab-4876-ab42-a5339f3a120b-kube-api-access-blmmf\") pod \"nfs-server-provisioner-0\" (UID: \"911430aa-d3ab-4876-ab42-a5339f3a120b\") " pod="default/nfs-server-provisioner-0" Jan 30 14:57:10.081290 kubelet[1911]: E0130 14:57:10.081029 1911 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:10.112879 kubelet[1911]: E0130 14:57:10.112798 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:10.294651 containerd[1508]: time="2025-01-30T14:57:10.294398602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:911430aa-d3ab-4876-ab42-a5339f3a120b,Namespace:default,Attempt:0,}" Jan 30 14:57:10.484263 systemd-networkd[1432]: cali60e51b789ff: Link UP Jan 30 14:57:10.485853 systemd-networkd[1432]: cali60e51b789ff: Gained carrier Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.373 [INFO][3542] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.11.234-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 911430aa-d3ab-4876-ab42-a5339f3a120b 1402 0 2025-01-30 14:57:09 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 
controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.244.11.234 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.373 [INFO][3542] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.414 [INFO][3552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" HandleID="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Workload="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.431 [INFO][3552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" HandleID="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Workload="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040c030), Attrs:map[string]string{"namespace":"default", "node":"10.244.11.234", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-30 14:57:10.414396631 +0000 UTC"}, Hostname:"10.244.11.234", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.431 [INFO][3552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.431 [INFO][3552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.431 [INFO][3552] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.11.234' Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.435 [INFO][3552] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.441 [INFO][3552] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.449 [INFO][3552] ipam/ipam.go 489: Trying affinity for 192.168.32.0/26 host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.452 [INFO][3552] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.456 [INFO][3552] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.456 [INFO][3552] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 
handle="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.459 [INFO][3552] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.465 [INFO][3552] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.477 [INFO][3552] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.3/26] block=192.168.32.0/26 handle="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.477 [INFO][3552] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.3/26] handle="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" host="10.244.11.234" Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.477 [INFO][3552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:57:10.502260 containerd[1508]: 2025-01-30 14:57:10.478 [INFO][3552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.3/26] IPv6=[] ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" HandleID="k8s-pod-network.2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Workload="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" Jan 30 14:57:10.503352 containerd[1508]: 2025-01-30 14:57:10.479 [INFO][3542] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"911430aa-d3ab-4876-ab42-a5339f3a120b", ResourceVersion:"1402", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.11.234", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:57:10.503352 containerd[1508]: 2025-01-30 14:57:10.480 [INFO][3542] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.3/32] ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" Jan 30 14:57:10.503352 containerd[1508]: 2025-01-30 14:57:10.480 [INFO][3542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" Jan 30 14:57:10.503352 containerd[1508]: 2025-01-30 14:57:10.485 [INFO][3542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" Jan 30 14:57:10.503644 containerd[1508]: 2025-01-30 14:57:10.485 [INFO][3542] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"911430aa-d3ab-4876-ab42-a5339f3a120b", ResourceVersion:"1402", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.11.234", ContainerID:"2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"b6:ed:cd:1d:be:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:57:10.503644 containerd[1508]: 2025-01-30 14:57:10.500 [INFO][3542] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.11.234-k8s-nfs--server--provisioner--0-eth0" Jan 30 14:57:10.536487 containerd[1508]: time="2025-01-30T14:57:10.536150762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:57:10.536487 containerd[1508]: time="2025-01-30T14:57:10.536292548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:57:10.536487 containerd[1508]: time="2025-01-30T14:57:10.536366493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:57:10.537230 containerd[1508]: time="2025-01-30T14:57:10.536664516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:57:10.571333 systemd[1]: Started cri-containerd-2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf.scope - libcontainer container 2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf. Jan 30 14:57:10.631605 containerd[1508]: time="2025-01-30T14:57:10.631510646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:911430aa-d3ab-4876-ab42-a5339f3a120b,Namespace:default,Attempt:0,} returns sandbox id \"2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf\"" Jan 30 14:57:10.634134 containerd[1508]: time="2025-01-30T14:57:10.633818757Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 14:57:11.113774 kubelet[1911]: E0130 14:57:11.113660 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:11.192140 systemd[1]: run-containerd-runc-k8s.io-2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf-runc.P5MP40.mount: Deactivated successfully. Jan 30 14:57:11.926942 systemd[1]: Started sshd@9-10.244.11.234:22-47.108.74.203:52434.service - OpenSSH per-connection server daemon (47.108.74.203:52434). 
Jan 30 14:57:12.013362 systemd-networkd[1432]: cali60e51b789ff: Gained IPv6LL Jan 30 14:57:12.115095 kubelet[1911]: E0130 14:57:12.113921 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:13.114234 kubelet[1911]: E0130 14:57:13.114116 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:14.015681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128962389.mount: Deactivated successfully. Jan 30 14:57:14.115256 kubelet[1911]: E0130 14:57:14.115195 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:14.917408 sshd[3612]: banner exchange: Connection from 47.108.74.203 port 52434: invalid format Jan 30 14:57:14.919053 systemd[1]: sshd@9-10.244.11.234:22-47.108.74.203:52434.service: Deactivated successfully. Jan 30 14:57:15.116303 kubelet[1911]: E0130 14:57:15.116245 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:16.117384 kubelet[1911]: E0130 14:57:16.117328 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:17.035529 containerd[1508]: time="2025-01-30T14:57:17.035421676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:17.037611 containerd[1508]: time="2025-01-30T14:57:17.037488017Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 30 14:57:17.041102 containerd[1508]: time="2025-01-30T14:57:17.040315675Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:17.043985 containerd[1508]: time="2025-01-30T14:57:17.043923971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:17.045806 containerd[1508]: time="2025-01-30T14:57:17.045636182Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.411775763s" Jan 30 14:57:17.045806 containerd[1508]: time="2025-01-30T14:57:17.045682590Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 14:57:17.049735 containerd[1508]: time="2025-01-30T14:57:17.049692384Z" level=info msg="CreateContainer within sandbox \"2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 14:57:17.065427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559474601.mount: Deactivated successfully. 
Jan 30 14:57:17.067092 containerd[1508]: time="2025-01-30T14:57:17.066656556Z" level=info msg="CreateContainer within sandbox \"2a359ed0b6ef29af69a3dd830eebc15e05671a2b52d9d6d84be280b7c87bdbbf\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9c950d3d2a33fcbafdf493f84a96f883f8ea571e80d1ab02b193e1c5763105fd\"" Jan 30 14:57:17.069183 containerd[1508]: time="2025-01-30T14:57:17.067975166Z" level=info msg="StartContainer for \"9c950d3d2a33fcbafdf493f84a96f883f8ea571e80d1ab02b193e1c5763105fd\"" Jan 30 14:57:17.113407 systemd[1]: Started cri-containerd-9c950d3d2a33fcbafdf493f84a96f883f8ea571e80d1ab02b193e1c5763105fd.scope - libcontainer container 9c950d3d2a33fcbafdf493f84a96f883f8ea571e80d1ab02b193e1c5763105fd. Jan 30 14:57:17.118457 kubelet[1911]: E0130 14:57:17.118341 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:17.155150 containerd[1508]: time="2025-01-30T14:57:17.154991993Z" level=info msg="StartContainer for \"9c950d3d2a33fcbafdf493f84a96f883f8ea571e80d1ab02b193e1c5763105fd\" returns successfully" Jan 30 14:57:18.119974 kubelet[1911]: E0130 14:57:18.119816 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:19.120494 kubelet[1911]: E0130 14:57:19.120345 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:20.121369 kubelet[1911]: E0130 14:57:20.121275 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:21.122141 kubelet[1911]: E0130 14:57:21.122025 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:22.122621 kubelet[1911]: E0130 14:57:22.122551 1911 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:23.122997 kubelet[1911]: E0130 14:57:23.122916 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:24.123491 kubelet[1911]: E0130 14:57:24.123389 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:25.123714 kubelet[1911]: E0130 14:57:25.123628 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:26.124245 kubelet[1911]: E0130 14:57:26.124148 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:26.402513 systemd[1]: Started sshd@10-10.244.11.234:22-47.108.74.203:44666.service - OpenSSH per-connection server daemon (47.108.74.203:44666). Jan 30 14:57:26.571189 kubelet[1911]: I0130 14:57:26.569948 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.155521704 podStartE2EDuration="17.569917575s" podCreationTimestamp="2025-01-30 14:57:09 +0000 UTC" firstStartedPulling="2025-01-30 14:57:10.6332429 +0000 UTC m=+41.537052270" lastFinishedPulling="2025-01-30 14:57:17.047638766 +0000 UTC m=+47.951448141" observedRunningTime="2025-01-30 14:57:17.641567344 +0000 UTC m=+48.545376728" watchObservedRunningTime="2025-01-30 14:57:26.569917575 +0000 UTC m=+57.473726962" Jan 30 14:57:26.578589 systemd[1]: Created slice kubepods-besteffort-pod5010e786_5287_4265_8f25_5bef8feea547.slice - libcontainer container kubepods-besteffort-pod5010e786_5287_4265_8f25_5bef8feea547.slice. 
Jan 30 14:57:26.592793 kubelet[1911]: I0130 14:57:26.592559 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dghh\" (UniqueName: \"kubernetes.io/projected/5010e786-5287-4265-8f25-5bef8feea547-kube-api-access-7dghh\") pod \"test-pod-1\" (UID: \"5010e786-5287-4265-8f25-5bef8feea547\") " pod="default/test-pod-1" Jan 30 14:57:26.592793 kubelet[1911]: I0130 14:57:26.592618 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db7ab64b-d746-4841-981f-74e209d32959\" (UniqueName: \"kubernetes.io/nfs/5010e786-5287-4265-8f25-5bef8feea547-pvc-db7ab64b-d746-4841-981f-74e209d32959\") pod \"test-pod-1\" (UID: \"5010e786-5287-4265-8f25-5bef8feea547\") " pod="default/test-pod-1" Jan 30 14:57:26.739120 kernel: FS-Cache: Loaded Jan 30 14:57:26.819229 kernel: RPC: Registered named UNIX socket transport module. Jan 30 14:57:26.819405 kernel: RPC: Registered udp transport module. Jan 30 14:57:26.820369 kernel: RPC: Registered tcp transport module. Jan 30 14:57:26.821250 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 14:57:26.822428 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 30 14:57:27.095269 kernel: NFS: Registering the id_resolver key type Jan 30 14:57:27.095477 kernel: Key type id_resolver registered Jan 30 14:57:27.096039 kernel: Key type id_legacy registered Jan 30 14:57:27.125496 kubelet[1911]: E0130 14:57:27.125323 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:27.145643 nfsidmap[3747]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 30 14:57:27.153867 nfsidmap[3750]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 30 14:57:27.184819 containerd[1508]: time="2025-01-30T14:57:27.184443364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5010e786-5287-4265-8f25-5bef8feea547,Namespace:default,Attempt:0,}" Jan 30 14:57:27.369347 systemd-networkd[1432]: cali5ec59c6bf6e: Link UP Jan 30 14:57:27.371227 systemd-networkd[1432]: cali5ec59c6bf6e: Gained carrier Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.251 [INFO][3754] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.11.234-k8s-test--pod--1-eth0 default 5010e786-5287-4265-8f25-5bef8feea547 1464 0 2025-01-30 14:57:12 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.244.11.234 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.251 [INFO][3754] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-eth0" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.295 [INFO][3764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" HandleID="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Workload="10.244.11.234-k8s-test--pod--1-eth0" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.309 [INFO][3764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" HandleID="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Workload="10.244.11.234-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051670), Attrs:map[string]string{"namespace":"default", "node":"10.244.11.234", "pod":"test-pod-1", "timestamp":"2025-01-30 14:57:27.295005631 +0000 UTC"}, Hostname:"10.244.11.234", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.309 [INFO][3764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.309 [INFO][3764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.309 [INFO][3764] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.11.234' Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.317 [INFO][3764] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.324 [INFO][3764] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.332 [INFO][3764] ipam/ipam.go 489: Trying affinity for 192.168.32.0/26 host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.335 [INFO][3764] ipam/ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.339 [INFO][3764] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.339 [INFO][3764] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.343 [INFO][3764] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708 Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.351 [INFO][3764] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.359 [INFO][3764] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.32.4/26] block=192.168.32.0/26 
handle="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.359 [INFO][3764] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.4/26] handle="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" host="10.244.11.234" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.359 [INFO][3764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.359 [INFO][3764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.4/26] IPv6=[] ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" HandleID="k8s-pod-network.d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Workload="10.244.11.234-k8s-test--pod--1-eth0" Jan 30 14:57:27.386495 containerd[1508]: 2025-01-30 14:57:27.362 [INFO][3754] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5010e786-5287-4265-8f25-5bef8feea547", ResourceVersion:"1464", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.244.11.234", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:57:27.392228 containerd[1508]: 2025-01-30 14:57:27.362 [INFO][3754] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.32.4/32] ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-eth0" Jan 30 14:57:27.392228 containerd[1508]: 2025-01-30 14:57:27.362 [INFO][3754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-eth0" Jan 30 14:57:27.392228 containerd[1508]: 2025-01-30 14:57:27.369 [INFO][3754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-eth0" Jan 30 14:57:27.392228 containerd[1508]: 2025-01-30 14:57:27.369 [INFO][3754] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.11.234-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5010e786-5287-4265-8f25-5bef8feea547", ResourceVersion:"1464", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 30, 14, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.11.234", ContainerID:"d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"32:c6:d8:25:dc:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:57:27.392228 containerd[1508]: 2025-01-30 14:57:27.380 [INFO][3754] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.11.234-k8s-test--pod--1-eth0" Jan 30 14:57:27.430024 containerd[1508]: time="2025-01-30T14:57:27.429709603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:57:27.430782 containerd[1508]: time="2025-01-30T14:57:27.430723344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:57:27.431106 containerd[1508]: time="2025-01-30T14:57:27.430920704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:57:27.431514 containerd[1508]: time="2025-01-30T14:57:27.431374791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:57:27.459310 systemd[1]: Started cri-containerd-d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708.scope - libcontainer container d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708. Jan 30 14:57:27.528382 containerd[1508]: time="2025-01-30T14:57:27.528331885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5010e786-5287-4265-8f25-5bef8feea547,Namespace:default,Attempt:0,} returns sandbox id \"d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708\"" Jan 30 14:57:27.530869 containerd[1508]: time="2025-01-30T14:57:27.530464954Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 14:57:27.905113 containerd[1508]: time="2025-01-30T14:57:27.903209846Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:57:27.907097 containerd[1508]: time="2025-01-30T14:57:27.906414006Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 14:57:27.910093 containerd[1508]: time="2025-01-30T14:57:27.910026704Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 379.508329ms" Jan 30 14:57:27.910237 containerd[1508]: time="2025-01-30T14:57:27.910208223Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 
14:57:27.918155 containerd[1508]: time="2025-01-30T14:57:27.918119135Z" level=info msg="CreateContainer within sandbox \"d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 14:57:27.932958 containerd[1508]: time="2025-01-30T14:57:27.932897767Z" level=info msg="CreateContainer within sandbox \"d19f946995bd7a4716fc9ca34042870472bfa8d8a59869ea9ae904715d29b708\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2a2b73d7e98b8ea13c1bf9b45f4ad959d144886ac44c47b934f095257d78fb52\"" Jan 30 14:57:27.933819 containerd[1508]: time="2025-01-30T14:57:27.933783545Z" level=info msg="StartContainer for \"2a2b73d7e98b8ea13c1bf9b45f4ad959d144886ac44c47b934f095257d78fb52\"" Jan 30 14:57:27.976386 systemd[1]: Started cri-containerd-2a2b73d7e98b8ea13c1bf9b45f4ad959d144886ac44c47b934f095257d78fb52.scope - libcontainer container 2a2b73d7e98b8ea13c1bf9b45f4ad959d144886ac44c47b934f095257d78fb52. Jan 30 14:57:28.016578 containerd[1508]: time="2025-01-30T14:57:28.016008474Z" level=info msg="StartContainer for \"2a2b73d7e98b8ea13c1bf9b45f4ad959d144886ac44c47b934f095257d78fb52\" returns successfully" Jan 30 14:57:28.126279 kubelet[1911]: E0130 14:57:28.126174 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:28.673421 kubelet[1911]: I0130 14:57:28.673320 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.287479104 podStartE2EDuration="16.673271683s" podCreationTimestamp="2025-01-30 14:57:12 +0000 UTC" firstStartedPulling="2025-01-30 14:57:27.530033324 +0000 UTC m=+58.433842696" lastFinishedPulling="2025-01-30 14:57:27.915825899 +0000 UTC m=+58.819635275" observedRunningTime="2025-01-30 14:57:28.672401723 +0000 UTC m=+59.576211113" watchObservedRunningTime="2025-01-30 14:57:28.673271683 +0000 UTC m=+59.577081067" Jan 30 14:57:29.127110 kubelet[1911]: 
E0130 14:57:29.126981 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:29.165951 systemd-networkd[1432]: cali5ec59c6bf6e: Gained IPv6LL Jan 30 14:57:30.081371 kubelet[1911]: E0130 14:57:30.081258 1911 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:30.124056 containerd[1508]: time="2025-01-30T14:57:30.123989858Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" Jan 30 14:57:30.124819 containerd[1508]: time="2025-01-30T14:57:30.124226139Z" level=info msg="TearDown network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully" Jan 30 14:57:30.124819 containerd[1508]: time="2025-01-30T14:57:30.124249156Z" level=info msg="StopPodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully" Jan 30 14:57:30.128140 kubelet[1911]: E0130 14:57:30.128092 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:30.130670 containerd[1508]: time="2025-01-30T14:57:30.130635381Z" level=info msg="RemovePodSandbox for \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" Jan 30 14:57:30.138840 containerd[1508]: time="2025-01-30T14:57:30.138745072Z" level=info msg="Forcibly stopping sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\"" Jan 30 14:57:30.148572 containerd[1508]: time="2025-01-30T14:57:30.138899266Z" level=info msg="TearDown network for sandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" successfully" Jan 30 14:57:30.175536 containerd[1508]: time="2025-01-30T14:57:30.175437911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\": an 
error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:57:30.175791 containerd[1508]: time="2025-01-30T14:57:30.175568526Z" level=info msg="RemovePodSandbox \"e1d98f09672c94e89b41c4f140d546c97c961040a22204d85cdbb8f8cf9750f1\" returns successfully" Jan 30 14:57:30.176624 containerd[1508]: time="2025-01-30T14:57:30.176580789Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\"" Jan 30 14:57:30.176783 containerd[1508]: time="2025-01-30T14:57:30.176757708Z" level=info msg="TearDown network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" successfully" Jan 30 14:57:30.176783 containerd[1508]: time="2025-01-30T14:57:30.176781348Z" level=info msg="StopPodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" returns successfully" Jan 30 14:57:30.177295 containerd[1508]: time="2025-01-30T14:57:30.177252770Z" level=info msg="RemovePodSandbox for \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\"" Jan 30 14:57:30.178328 containerd[1508]: time="2025-01-30T14:57:30.177451485Z" level=info msg="Forcibly stopping sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\"" Jan 30 14:57:30.178328 containerd[1508]: time="2025-01-30T14:57:30.177566702Z" level=info msg="TearDown network for sandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" successfully" Jan 30 14:57:30.192944 containerd[1508]: time="2025-01-30T14:57:30.192837404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.193179 containerd[1508]: time="2025-01-30T14:57:30.192964809Z" level=info msg="RemovePodSandbox \"41df09afd726f8105c3f3ecc76238dc2f911944a62913dd26360d641f16b5618\" returns successfully" Jan 30 14:57:30.194351 containerd[1508]: time="2025-01-30T14:57:30.193867155Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\"" Jan 30 14:57:30.194351 containerd[1508]: time="2025-01-30T14:57:30.194034612Z" level=info msg="TearDown network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" successfully" Jan 30 14:57:30.194351 containerd[1508]: time="2025-01-30T14:57:30.194057234Z" level=info msg="StopPodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" returns successfully" Jan 30 14:57:30.195202 containerd[1508]: time="2025-01-30T14:57:30.194922726Z" level=info msg="RemovePodSandbox for \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\"" Jan 30 14:57:30.195202 containerd[1508]: time="2025-01-30T14:57:30.194958533Z" level=info msg="Forcibly stopping sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\"" Jan 30 14:57:30.195202 containerd[1508]: time="2025-01-30T14:57:30.195079170Z" level=info msg="TearDown network for sandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" successfully" Jan 30 14:57:30.201094 containerd[1508]: time="2025-01-30T14:57:30.198639742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.201094 containerd[1508]: time="2025-01-30T14:57:30.198754451Z" level=info msg="RemovePodSandbox \"18553c42538670b9db2d2756483b75c3d03706320a780dd1c18c218799e23b8b\" returns successfully" Jan 30 14:57:30.203091 containerd[1508]: time="2025-01-30T14:57:30.203024418Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\"" Jan 30 14:57:30.203238 containerd[1508]: time="2025-01-30T14:57:30.203206713Z" level=info msg="TearDown network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" successfully" Jan 30 14:57:30.203325 containerd[1508]: time="2025-01-30T14:57:30.203237154Z" level=info msg="StopPodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" returns successfully" Jan 30 14:57:30.203873 containerd[1508]: time="2025-01-30T14:57:30.203833143Z" level=info msg="RemovePodSandbox for \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\"" Jan 30 14:57:30.203961 containerd[1508]: time="2025-01-30T14:57:30.203881860Z" level=info msg="Forcibly stopping sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\"" Jan 30 14:57:30.204042 containerd[1508]: time="2025-01-30T14:57:30.203981049Z" level=info msg="TearDown network for sandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" successfully" Jan 30 14:57:30.207171 containerd[1508]: time="2025-01-30T14:57:30.207126066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.207331 containerd[1508]: time="2025-01-30T14:57:30.207301788Z" level=info msg="RemovePodSandbox \"882bdfb5d4d70257a97829500f87079d0293bae0d271e6ea7c08268947b02f0b\" returns successfully" Jan 30 14:57:30.207870 containerd[1508]: time="2025-01-30T14:57:30.207838327Z" level=info msg="StopPodSandbox for \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\"" Jan 30 14:57:30.208229 containerd[1508]: time="2025-01-30T14:57:30.208201661Z" level=info msg="TearDown network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" successfully" Jan 30 14:57:30.208371 containerd[1508]: time="2025-01-30T14:57:30.208346632Z" level=info msg="StopPodSandbox for \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" returns successfully" Jan 30 14:57:30.209020 containerd[1508]: time="2025-01-30T14:57:30.208990593Z" level=info msg="RemovePodSandbox for \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\"" Jan 30 14:57:30.209170 containerd[1508]: time="2025-01-30T14:57:30.209145065Z" level=info msg="Forcibly stopping sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\"" Jan 30 14:57:30.209428 containerd[1508]: time="2025-01-30T14:57:30.209375155Z" level=info msg="TearDown network for sandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" successfully" Jan 30 14:57:30.211989 containerd[1508]: time="2025-01-30T14:57:30.211955441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.212239 containerd[1508]: time="2025-01-30T14:57:30.212208847Z" level=info msg="RemovePodSandbox \"e7f250782c5ab52b8bdfe01f8c997fd05f102b466cee9c74029b2a3dffdaee6d\" returns successfully" Jan 30 14:57:30.212952 containerd[1508]: time="2025-01-30T14:57:30.212697926Z" level=info msg="StopPodSandbox for \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\"" Jan 30 14:57:30.212952 containerd[1508]: time="2025-01-30T14:57:30.212810714Z" level=info msg="TearDown network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\" successfully" Jan 30 14:57:30.212952 containerd[1508]: time="2025-01-30T14:57:30.212830361Z" level=info msg="StopPodSandbox for \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\" returns successfully" Jan 30 14:57:30.213731 containerd[1508]: time="2025-01-30T14:57:30.213528267Z" level=info msg="RemovePodSandbox for \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\"" Jan 30 14:57:30.213731 containerd[1508]: time="2025-01-30T14:57:30.213563218Z" level=info msg="Forcibly stopping sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\"" Jan 30 14:57:30.213731 containerd[1508]: time="2025-01-30T14:57:30.213652169Z" level=info msg="TearDown network for sandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\" successfully" Jan 30 14:57:30.216664 containerd[1508]: time="2025-01-30T14:57:30.216534691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.216664 containerd[1508]: time="2025-01-30T14:57:30.216587353Z" level=info msg="RemovePodSandbox \"da30d25875ce7f86c87c6eef08e0b014a9d9780cf6e22bee3e2003196def865e\" returns successfully" Jan 30 14:57:30.217601 containerd[1508]: time="2025-01-30T14:57:30.217134606Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:57:30.217601 containerd[1508]: time="2025-01-30T14:57:30.217246350Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:57:30.217601 containerd[1508]: time="2025-01-30T14:57:30.217278679Z" level=info msg="StopPodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:57:30.217794 containerd[1508]: time="2025-01-30T14:57:30.217730589Z" level=info msg="RemovePodSandbox for \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:57:30.217794 containerd[1508]: time="2025-01-30T14:57:30.217764882Z" level=info msg="Forcibly stopping sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\"" Jan 30 14:57:30.218032 containerd[1508]: time="2025-01-30T14:57:30.217858588Z" level=info msg="TearDown network for sandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" successfully" Jan 30 14:57:30.220918 containerd[1508]: time="2025-01-30T14:57:30.220844367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.220918 containerd[1508]: time="2025-01-30T14:57:30.220909210Z" level=info msg="RemovePodSandbox \"bd3d5c609570b40f84c71ad7a5595eb9afd5d18585792931cbab2d17a3469ca5\" returns successfully" Jan 30 14:57:30.222555 containerd[1508]: time="2025-01-30T14:57:30.222397560Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:57:30.222754 containerd[1508]: time="2025-01-30T14:57:30.222654659Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:57:30.222754 containerd[1508]: time="2025-01-30T14:57:30.222686432Z" level=info msg="StopPodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:57:30.224484 containerd[1508]: time="2025-01-30T14:57:30.223190016Z" level=info msg="RemovePodSandbox for \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:57:30.224484 containerd[1508]: time="2025-01-30T14:57:30.223227768Z" level=info msg="Forcibly stopping sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\"" Jan 30 14:57:30.224484 containerd[1508]: time="2025-01-30T14:57:30.223339043Z" level=info msg="TearDown network for sandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" successfully" Jan 30 14:57:30.226355 containerd[1508]: time="2025-01-30T14:57:30.226319546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.226542 containerd[1508]: time="2025-01-30T14:57:30.226513565Z" level=info msg="RemovePodSandbox \"d95e5644b8fab9fc0c7987249ecf728f68c43890ececbb05769861fe9f9267b0\" returns successfully" Jan 30 14:57:30.228145 containerd[1508]: time="2025-01-30T14:57:30.228113419Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:57:30.228367 containerd[1508]: time="2025-01-30T14:57:30.228338551Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully" Jan 30 14:57:30.228527 containerd[1508]: time="2025-01-30T14:57:30.228500142Z" level=info msg="StopPodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully" Jan 30 14:57:30.229215 containerd[1508]: time="2025-01-30T14:57:30.229185017Z" level=info msg="RemovePodSandbox for \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:57:30.229548 containerd[1508]: time="2025-01-30T14:57:30.229522433Z" level=info msg="Forcibly stopping sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\"" Jan 30 14:57:30.229772 containerd[1508]: time="2025-01-30T14:57:30.229723096Z" level=info msg="TearDown network for sandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" successfully" Jan 30 14:57:30.233435 containerd[1508]: time="2025-01-30T14:57:30.233402229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.233593 containerd[1508]: time="2025-01-30T14:57:30.233565592Z" level=info msg="RemovePodSandbox \"9038aa05c143f5d1db0fd67e58903074e426024b2ec77de18e1cbdc4e3833dba\" returns successfully" Jan 30 14:57:30.236710 containerd[1508]: time="2025-01-30T14:57:30.236677870Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" Jan 30 14:57:30.236972 containerd[1508]: time="2025-01-30T14:57:30.236943587Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully" Jan 30 14:57:30.237152 containerd[1508]: time="2025-01-30T14:57:30.237126187Z" level=info msg="StopPodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully" Jan 30 14:57:30.237744 containerd[1508]: time="2025-01-30T14:57:30.237700910Z" level=info msg="RemovePodSandbox for \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" Jan 30 14:57:30.237827 containerd[1508]: time="2025-01-30T14:57:30.237744501Z" level=info msg="Forcibly stopping sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\"" Jan 30 14:57:30.237892 containerd[1508]: time="2025-01-30T14:57:30.237835069Z" level=info msg="TearDown network for sandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" successfully" Jan 30 14:57:30.241530 containerd[1508]: time="2025-01-30T14:57:30.241492893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.241988 containerd[1508]: time="2025-01-30T14:57:30.241556582Z" level=info msg="RemovePodSandbox \"51e47ce6859e0c8224ccddb7d4975cab63c3590bfcdf23ef81b4680e413fec6d\" returns successfully" Jan 30 14:57:30.244119 containerd[1508]: time="2025-01-30T14:57:30.242385389Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" Jan 30 14:57:30.244119 containerd[1508]: time="2025-01-30T14:57:30.242508910Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully" Jan 30 14:57:30.244119 containerd[1508]: time="2025-01-30T14:57:30.242542184Z" level=info msg="StopPodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully" Jan 30 14:57:30.245124 containerd[1508]: time="2025-01-30T14:57:30.244730010Z" level=info msg="RemovePodSandbox for \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" Jan 30 14:57:30.245124 containerd[1508]: time="2025-01-30T14:57:30.244772511Z" level=info msg="Forcibly stopping sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\"" Jan 30 14:57:30.245124 containerd[1508]: time="2025-01-30T14:57:30.244861026Z" level=info msg="TearDown network for sandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" successfully" Jan 30 14:57:30.247482 containerd[1508]: time="2025-01-30T14:57:30.247431437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.247563 containerd[1508]: time="2025-01-30T14:57:30.247490809Z" level=info msg="RemovePodSandbox \"0047141d98b4f281249932883f6044a7a71f5c64d93703c8c731390c7be1fb27\" returns successfully" Jan 30 14:57:30.248097 containerd[1508]: time="2025-01-30T14:57:30.248041434Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\"" Jan 30 14:57:30.248202 containerd[1508]: time="2025-01-30T14:57:30.248175852Z" level=info msg="TearDown network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" successfully" Jan 30 14:57:30.248273 containerd[1508]: time="2025-01-30T14:57:30.248203942Z" level=info msg="StopPodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" returns successfully" Jan 30 14:57:30.249641 containerd[1508]: time="2025-01-30T14:57:30.248551692Z" level=info msg="RemovePodSandbox for \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\"" Jan 30 14:57:30.249641 containerd[1508]: time="2025-01-30T14:57:30.248587497Z" level=info msg="Forcibly stopping sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\"" Jan 30 14:57:30.249641 containerd[1508]: time="2025-01-30T14:57:30.248682784Z" level=info msg="TearDown network for sandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" successfully" Jan 30 14:57:30.251365 containerd[1508]: time="2025-01-30T14:57:30.251330332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.251510 containerd[1508]: time="2025-01-30T14:57:30.251482541Z" level=info msg="RemovePodSandbox \"2430ca96a200de6f4701420853943b8d824ad7be6a16d58479ecd138e315d159\" returns successfully" Jan 30 14:57:30.252189 containerd[1508]: time="2025-01-30T14:57:30.252162530Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\"" Jan 30 14:57:30.252440 containerd[1508]: time="2025-01-30T14:57:30.252413396Z" level=info msg="TearDown network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" successfully" Jan 30 14:57:30.252572 containerd[1508]: time="2025-01-30T14:57:30.252547120Z" level=info msg="StopPodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" returns successfully" Jan 30 14:57:30.253178 containerd[1508]: time="2025-01-30T14:57:30.253144010Z" level=info msg="RemovePodSandbox for \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\"" Jan 30 14:57:30.253342 containerd[1508]: time="2025-01-30T14:57:30.253316542Z" level=info msg="Forcibly stopping sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\"" Jan 30 14:57:30.253576 containerd[1508]: time="2025-01-30T14:57:30.253531536Z" level=info msg="TearDown network for sandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" successfully" Jan 30 14:57:30.255938 containerd[1508]: time="2025-01-30T14:57:30.255905387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.256113 containerd[1508]: time="2025-01-30T14:57:30.256085400Z" level=info msg="RemovePodSandbox \"5e4755132eec85704e84ed55f6d32f78a72c2b3cf036fc5a299d4b91c6b1cb28\" returns successfully" Jan 30 14:57:30.256810 containerd[1508]: time="2025-01-30T14:57:30.256776182Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\"" Jan 30 14:57:30.256931 containerd[1508]: time="2025-01-30T14:57:30.256905899Z" level=info msg="TearDown network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" successfully" Jan 30 14:57:30.257006 containerd[1508]: time="2025-01-30T14:57:30.256933797Z" level=info msg="StopPodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" returns successfully" Jan 30 14:57:30.257415 containerd[1508]: time="2025-01-30T14:57:30.257382844Z" level=info msg="RemovePodSandbox for \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\"" Jan 30 14:57:30.257535 containerd[1508]: time="2025-01-30T14:57:30.257421476Z" level=info msg="Forcibly stopping sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\"" Jan 30 14:57:30.257692 containerd[1508]: time="2025-01-30T14:57:30.257512912Z" level=info msg="TearDown network for sandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" successfully" Jan 30 14:57:30.260473 containerd[1508]: time="2025-01-30T14:57:30.260394250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.260586 containerd[1508]: time="2025-01-30T14:57:30.260479477Z" level=info msg="RemovePodSandbox \"f259287be116a6246c3821a069792e85404b434886e6a9388a2baecbff57e5a0\" returns successfully" Jan 30 14:57:30.261475 containerd[1508]: time="2025-01-30T14:57:30.261244399Z" level=info msg="StopPodSandbox for \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\"" Jan 30 14:57:30.261475 containerd[1508]: time="2025-01-30T14:57:30.261374044Z" level=info msg="TearDown network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" successfully" Jan 30 14:57:30.261475 containerd[1508]: time="2025-01-30T14:57:30.261394564Z" level=info msg="StopPodSandbox for \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" returns successfully" Jan 30 14:57:30.263516 containerd[1508]: time="2025-01-30T14:57:30.262338956Z" level=info msg="RemovePodSandbox for \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\"" Jan 30 14:57:30.263516 containerd[1508]: time="2025-01-30T14:57:30.262376791Z" level=info msg="Forcibly stopping sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\"" Jan 30 14:57:30.263516 containerd[1508]: time="2025-01-30T14:57:30.262468342Z" level=info msg="TearDown network for sandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" successfully" Jan 30 14:57:30.265318 containerd[1508]: time="2025-01-30T14:57:30.265282621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.265490 containerd[1508]: time="2025-01-30T14:57:30.265447714Z" level=info msg="RemovePodSandbox \"5de4727472ad39491307e908695d73d48e278e1ac7d186b855cb8119e37a26ca\" returns successfully" Jan 30 14:57:30.265975 containerd[1508]: time="2025-01-30T14:57:30.265928344Z" level=info msg="StopPodSandbox for \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\"" Jan 30 14:57:30.266130 containerd[1508]: time="2025-01-30T14:57:30.266095046Z" level=info msg="TearDown network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\" successfully" Jan 30 14:57:30.266130 containerd[1508]: time="2025-01-30T14:57:30.266122855Z" level=info msg="StopPodSandbox for \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\" returns successfully" Jan 30 14:57:30.266702 containerd[1508]: time="2025-01-30T14:57:30.266656071Z" level=info msg="RemovePodSandbox for \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\"" Jan 30 14:57:30.266764 containerd[1508]: time="2025-01-30T14:57:30.266706616Z" level=info msg="Forcibly stopping sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\"" Jan 30 14:57:30.266850 containerd[1508]: time="2025-01-30T14:57:30.266796737Z" level=info msg="TearDown network for sandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\" successfully" Jan 30 14:57:30.269775 containerd[1508]: time="2025-01-30T14:57:30.269730839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:57:30.269983 containerd[1508]: time="2025-01-30T14:57:30.269783293Z" level=info msg="RemovePodSandbox \"bbc8162d08327bb49bc3aedf16a297b4603f5c51e9164a15852ce97cba030e53\" returns successfully" Jan 30 14:57:31.128938 kubelet[1911]: E0130 14:57:31.128851 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:32.129788 kubelet[1911]: E0130 14:57:32.129723 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:33.130666 kubelet[1911]: E0130 14:57:33.130576 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:34.131376 kubelet[1911]: E0130 14:57:34.131293 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:35.131762 kubelet[1911]: E0130 14:57:35.131674 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:36.132713 kubelet[1911]: E0130 14:57:36.132621 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:37.133762 kubelet[1911]: E0130 14:57:37.133657 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:38.133953 kubelet[1911]: E0130 14:57:38.133867 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:57:39.134419 kubelet[1911]: E0130 14:57:39.134321 1911 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"