Jan 13 21:40:58.032980 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 21:40:58.033031 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 21:40:58.033046 kernel: BIOS-provided physical RAM map:
Jan 13 21:40:58.033062 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:40:58.033072 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:40:58.033082 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:40:58.033094 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 13 21:40:58.033104 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 13 21:40:58.033114 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:40:58.033125 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:40:58.033135 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:40:58.033145 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:40:58.033161 kernel: NX (Execute Disable) protection: active
Jan 13 21:40:58.033171 kernel: APIC: Static calls initialized
Jan 13 21:40:58.033184 kernel: SMBIOS 2.8 present.
Jan 13 21:40:58.033195 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 13 21:40:58.033206 kernel: Hypervisor detected: KVM
Jan 13 21:40:58.033222 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:40:58.033249 kernel: kvm-clock: using sched offset of 4475959246 cycles
Jan 13 21:40:58.033262 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:40:58.033273 kernel: tsc: Detected 2799.998 MHz processor
Jan 13 21:40:58.033285 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:40:58.033296 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:40:58.033307 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 13 21:40:58.033319 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:40:58.033345 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:40:58.033363 kernel: Using GB pages for direct mapping
Jan 13 21:40:58.033374 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:40:58.033385 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 13 21:40:58.033397 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:40:58.033408 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:40:58.033419 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:40:58.033431 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 13 21:40:58.033442 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:40:58.033453 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:40:58.033469 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:40:58.033480 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:40:58.033492 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 13 21:40:58.033503 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 13 21:40:58.033514 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 13 21:40:58.033531 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 13 21:40:58.033543 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 13 21:40:58.033559 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 13 21:40:58.033571 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 13 21:40:58.033583 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:40:58.033595 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:40:58.033606 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 21:40:58.033618 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 13 21:40:58.033629 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 21:40:58.033641 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 13 21:40:58.033665 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 21:40:58.033677 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 13 21:40:58.033688 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 21:40:58.033700 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 13 21:40:58.033711 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 21:40:58.033723 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 13 21:40:58.033734 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 21:40:58.033746 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 13 21:40:58.033757 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 21:40:58.033769 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 13 21:40:58.033785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:40:58.033797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 21:40:58.033815 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 13 21:40:58.033827 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 13 21:40:58.033850 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 13 21:40:58.033863 kernel: Zone ranges:
Jan 13 21:40:58.033880 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:40:58.033891 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 13 21:40:58.033903 kernel: Normal empty
Jan 13 21:40:58.033920 kernel: Movable zone start for each node
Jan 13 21:40:58.033932 kernel: Early memory node ranges
Jan 13 21:40:58.033943 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:40:58.033955 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 13 21:40:58.033967 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 13 21:40:58.033978 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:40:58.033990 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:40:58.034002 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 13 21:40:58.034013 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:40:58.034030 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:40:58.034042 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:40:58.034054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:40:58.034065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:40:58.034077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:40:58.034088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:40:58.034100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:40:58.034111 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:40:58.034123 kernel: TSC deadline timer available
Jan 13 21:40:58.034140 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 13 21:40:58.034151 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:40:58.034163 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:40:58.034175 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:40:58.034186 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:40:58.034198 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 21:40:58.034210 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 21:40:58.034221 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 21:40:58.034233 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 21:40:58.034249 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:40:58.034261 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:40:58.034274 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 21:40:58.034286 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:40:58.034298 kernel: random: crng init done
Jan 13 21:40:58.034310 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:40:58.034387 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:40:58.034404 kernel: Fallback order for Node 0: 0
Jan 13 21:40:58.034432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 13 21:40:58.034444 kernel: Policy zone: DMA32
Jan 13 21:40:58.034455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:40:58.034467 kernel: software IO TLB: area num 16.
Jan 13 21:40:58.034479 kernel: Memory: 1899484K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 196872K reserved, 0K cma-reserved)
Jan 13 21:40:58.034491 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 21:40:58.034503 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:40:58.034515 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 21:40:58.034526 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:40:58.034550 kernel: Dynamic Preempt: voluntary
Jan 13 21:40:58.034562 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:40:58.034574 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:40:58.034586 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 21:40:58.034598 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:40:58.034629 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:40:58.034646 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:40:58.034658 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:40:58.034674 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 21:40:58.034687 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 13 21:40:58.034699 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:40:58.034711 kernel: Console: colour VGA+ 80x25
Jan 13 21:40:58.034728 kernel: printk: console [tty0] enabled
Jan 13 21:40:58.034740 kernel: printk: console [ttyS0] enabled
Jan 13 21:40:58.034752 kernel: ACPI: Core revision 20230628
Jan 13 21:40:58.034765 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:40:58.034777 kernel: x2apic enabled
Jan 13 21:40:58.034794 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:40:58.034806 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 13 21:40:58.034819 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 13 21:40:58.034831 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:40:58.034855 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 21:40:58.034867 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 21:40:58.034879 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:40:58.034903 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:40:58.034923 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:40:58.034936 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:40:58.034954 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 13 21:40:58.034966 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:40:58.034979 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:40:58.034991 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 21:40:58.035007 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 13 21:40:58.035021 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 13 21:40:58.035033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:40:58.035046 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:40:58.035058 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:40:58.035070 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:40:58.035087 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 21:40:58.035100 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:40:58.035112 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:40:58.035124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:40:58.035136 kernel: landlock: Up and running.
Jan 13 21:40:58.035149 kernel: SELinux: Initializing.
Jan 13 21:40:58.035161 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:40:58.035173 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:40:58.035185 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 13 21:40:58.035198 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:40:58.035210 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:40:58.035227 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 21:40:58.035240 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 13 21:40:58.035253 kernel: signal: max sigframe size: 1776
Jan 13 21:40:58.035265 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:40:58.035278 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:40:58.035290 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:40:58.035302 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:40:58.035315 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:40:58.035348 kernel: .... node #0, CPUs: #1
Jan 13 21:40:58.035370 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 13 21:40:58.035382 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:40:58.035394 kernel: smpboot: Max logical packages: 16
Jan 13 21:40:58.035407 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 13 21:40:58.035426 kernel: devtmpfs: initialized
Jan 13 21:40:58.035439 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:40:58.035452 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:40:58.035464 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 21:40:58.035477 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:40:58.035495 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:40:58.035507 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:40:58.035519 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:40:58.035532 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:40:58.035544 kernel: audit: type=2000 audit(1736804456.321:1): state=initialized audit_enabled=0 res=1
Jan 13 21:40:58.035556 kernel: cpuidle: using governor menu
Jan 13 21:40:58.035568 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:40:58.035581 kernel: dca service started, version 1.12.1
Jan 13 21:40:58.035594 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:40:58.035610 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:40:58.035623 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:40:58.035636 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:40:58.035648 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:40:58.035660 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:40:58.035677 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:40:58.035690 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:40:58.035702 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:40:58.035715 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:40:58.035741 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:40:58.035754 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:40:58.035766 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:40:58.035778 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:40:58.035791 kernel: ACPI: Interpreter enabled
Jan 13 21:40:58.035805 kernel: ACPI: PM: (supports S0 S5)
Jan 13 21:40:58.035817 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:40:58.035829 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:40:58.035855 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:40:58.035876 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:40:58.035889 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:40:58.036199 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:40:58.036415 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:40:58.036597 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:40:58.036617 kernel: PCI host bridge to bus 0000:00
Jan 13 21:40:58.036822 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:40:58.036995 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:40:58.037147 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:40:58.037293 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 13 21:40:58.037487 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:40:58.037639 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 13 21:40:58.037794 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:40:58.037996 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:40:58.038189 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 13 21:40:58.038398 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 13 21:40:58.038569 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 13 21:40:58.038739 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 13 21:40:58.038916 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:40:58.039089 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.039261 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 13 21:40:58.039458 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.039677 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 13 21:40:58.039878 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.040043 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 13 21:40:58.040235 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.040428 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 13 21:40:58.040609 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.040771 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 13 21:40:58.040966 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.041128 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 13 21:40:58.041299 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.041526 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 13 21:40:58.041702 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 21:40:58.041874 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 13 21:40:58.042043 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:40:58.042202 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:40:58.042406 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 13 21:40:58.042571 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 13 21:40:58.042744 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 13 21:40:58.042926 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:40:58.043084 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:40:58.043242 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 13 21:40:58.043461 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 13 21:40:58.043656 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:40:58.043818 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:40:58.044018 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:40:58.044217 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 13 21:40:58.044432 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 13 21:40:58.044619 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:40:58.044791 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:40:58.044982 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 13 21:40:58.045153 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 13 21:40:58.045317 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 21:40:58.045521 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 21:40:58.045681 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 21:40:58.045872 kernel: pci_bus 0000:02: extended config space not accessible
Jan 13 21:40:58.046073 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 13 21:40:58.046252 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 13 21:40:58.046524 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 21:40:58.046688 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 21:40:58.046896 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 21:40:58.047092 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 13 21:40:58.047252 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 21:40:58.047443 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 21:40:58.047600 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 21:40:58.047801 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 21:40:58.048000 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 13 21:40:58.048163 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 21:40:58.048369 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 21:40:58.048531 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 21:40:58.048698 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 21:40:58.048877 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 21:40:58.049042 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 21:40:58.049199 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 21:40:58.049382 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 21:40:58.049549 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 21:40:58.049714 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 21:40:58.049883 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 21:40:58.050040 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 21:40:58.050197 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 21:40:58.050406 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 21:40:58.050564 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 21:40:58.050728 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 21:40:58.050897 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 21:40:58.051053 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 21:40:58.051072 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:40:58.051085 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:40:58.051098 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:40:58.051111 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:40:58.051130 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:40:58.051143 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:40:58.051160 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:40:58.051172 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:40:58.051185 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:40:58.051197 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:40:58.051209 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:40:58.051222 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:40:58.051234 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:40:58.051260 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:40:58.051273 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:40:58.051285 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:40:58.051298 kernel: iommu: Default domain type: Translated
Jan 13 21:40:58.051318 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:40:58.051357 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:40:58.051370 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:40:58.051383 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:40:58.051395 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 13 21:40:58.051576 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:40:58.051760 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:40:58.051950 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:40:58.051969 kernel: vgaarb: loaded
Jan 13 21:40:58.051982 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:40:58.051995 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:40:58.052007 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:40:58.052020 kernel: pnp: PnP ACPI init
Jan 13 21:40:58.052191 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:40:58.052211 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:40:58.052225 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:40:58.052237 kernel: NET: Registered PF_INET protocol family
Jan 13 21:40:58.052250 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:40:58.052263 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:40:58.052276 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:40:58.052288 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:40:58.052308 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:40:58.052321 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:40:58.052371 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:40:58.052388 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:40:58.052400 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:40:58.052413 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:40:58.052585 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 13 21:40:58.052741 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 21:40:58.052928 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 21:40:58.053087 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 21:40:58.053242 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 21:40:58.053428 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 21:40:58.053623 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 21:40:58.053794 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 21:40:58.053973 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 21:40:58.054131 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 21:40:58.054289 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 21:40:58.054485 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 21:40:58.054642 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 21:40:58.054798 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 21:40:58.055026 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 21:40:58.055189 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 21:40:58.055410 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 21:40:58.055583 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 21:40:58.055746 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 21:40:58.055966 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 21:40:58.056128 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 21:40:58.056362 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 21:40:58.056568 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 21:40:58.056728 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 21:40:58.056910 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 21:40:58.057069 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 21:40:58.057226 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 21:40:58.057421 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 21:40:58.057581 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 21:40:58.057747 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 21:40:58.057927 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 21:40:58.058088 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 21:40:58.058256 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 21:40:58.058469 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 21:40:58.058628 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 21:40:58.058784 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 21:40:58.058956 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 21:40:58.059114 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 21:40:58.059273 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 21:40:58.059471 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 21:40:58.059632 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 21:40:58.059791 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 21:40:58.059963 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 21:40:58.060123 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 21:40:58.060295 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 21:40:58.060529 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 21:40:58.060688 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 21:40:58.060881 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 21:40:58.061040 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 21:40:58.061238 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 21:40:58.061421 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:40:58.061602 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:40:58.061749 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:40:58.061916 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 13 21:40:58.062062 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:40:58.062206 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 13 21:40:58.062425 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 21:40:58.062590 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 13 21:40:58.062744 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 21:40:58.062951 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 13 21:40:58.063123 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 13 21:40:58.063273 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 13 21:40:58.063464 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 21:40:58.063652 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 13 21:40:58.063824 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 13 21:40:58.063987 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 21:40:58.064164 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 13 21:40:58.064316 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 13 21:40:58.064508 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 21:40:58.064676 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 13 21:40:58.064826 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 13 21:40:58.064989 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 21:40:58.065146 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 13 21:40:58.065304 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 13 21:40:58.065481 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 21:40:58.065657 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 13 21:40:58.065810 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 13 21:40:58.065973 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 21:40:58.066133 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 13 21:40:58.066286 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 13 21:40:58.066484 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 21:40:58.066506 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:40:58.066520 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:40:58.066540 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan
13 21:40:58.066553 kernel: software IO TLB: mapped [mem 0x0000000073e00000-0x0000000077e00000] (64MB) Jan 13 21:40:58.066566 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 21:40:58.066579 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 13 21:40:58.066593 kernel: Initialise system trusted keyrings Jan 13 21:40:58.066610 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 21:40:58.066623 kernel: Key type asymmetric registered Jan 13 21:40:58.066636 kernel: Asymmetric key parser 'x509' registered Jan 13 21:40:58.066649 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:40:58.066662 kernel: io scheduler mq-deadline registered Jan 13 21:40:58.066675 kernel: io scheduler kyber registered Jan 13 21:40:58.066688 kernel: io scheduler bfq registered Jan 13 21:40:58.066855 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 13 21:40:58.067018 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 13 21:40:58.067183 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.067368 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 13 21:40:58.067538 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 13 21:40:58.067709 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.067888 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 13 21:40:58.068049 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 13 21:40:58.068223 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.068426 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 13 
21:40:58.068609 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 13 21:40:58.068787 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.068969 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 13 21:40:58.069138 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 13 21:40:58.069322 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.069509 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 13 21:40:58.069666 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 13 21:40:58.069844 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.070013 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 13 21:40:58.070182 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 13 21:40:58.070401 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.070563 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 13 21:40:58.070721 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 13 21:40:58.070922 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 21:40:58.070943 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:40:58.070957 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 21:40:58.070977 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 21:40:58.070990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:40:58.071003 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:40:58.071016 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:40:58.071029 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:40:58.071043 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:40:58.071056 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:40:58.071246 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 13 21:40:58.071451 kernel: rtc_cmos 00:03: registered as rtc0 Jan 13 21:40:58.071622 kernel: rtc_cmos 00:03: setting system clock to 2025-01-13T21:40:57 UTC (1736804457) Jan 13 21:40:58.071787 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 13 21:40:58.071817 kernel: intel_pstate: CPU model not supported Jan 13 21:40:58.071829 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:40:58.071853 kernel: Segment Routing with IPv6 Jan 13 21:40:58.071878 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:40:58.071891 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:40:58.071904 kernel: Key type dns_resolver registered Jan 13 21:40:58.071923 kernel: IPI shorthand broadcast: enabled Jan 13 21:40:58.071937 kernel: sched_clock: Marking stable (1203014283, 222784714)->(1543350205, -117551208) Jan 13 21:40:58.071950 kernel: registered taskstats version 1 Jan 13 21:40:58.071963 kernel: Loading compiled-in X.509 certificates Jan 13 21:40:58.071981 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 21:40:58.071994 kernel: Key type .fscrypt registered Jan 13 21:40:58.072007 kernel: Key type fscrypt-provisioning registered Jan 13 21:40:58.072020 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:40:58.072033 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:40:58.072051 kernel: ima: No architecture policies found Jan 13 21:40:58.072064 kernel: clk: Disabling unused clocks Jan 13 21:40:58.072077 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 13 21:40:58.072090 kernel: Write protecting the kernel read-only data: 38912k Jan 13 21:40:58.072103 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 13 21:40:58.072116 kernel: Run /init as init process Jan 13 21:40:58.072128 kernel: with arguments: Jan 13 21:40:58.072141 kernel: /init Jan 13 21:40:58.072166 kernel: with environment: Jan 13 21:40:58.072182 kernel: HOME=/ Jan 13 21:40:58.072193 kernel: TERM=linux Jan 13 21:40:58.072205 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:40:58.072238 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:40:58.072255 systemd[1]: Detected virtualization kvm. Jan 13 21:40:58.072269 systemd[1]: Detected architecture x86-64. Jan 13 21:40:58.072282 systemd[1]: Running in initrd. Jan 13 21:40:58.072307 systemd[1]: No hostname configured, using default hostname. Jan 13 21:40:58.072326 systemd[1]: Hostname set to . Jan 13 21:40:58.072391 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:40:58.072407 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:40:58.072421 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:40:58.072435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 21:40:58.072449 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:40:58.072463 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:40:58.072477 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:40:58.072498 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:40:58.072514 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:40:58.072529 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:40:58.072543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:40:58.072557 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:40:58.072571 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:40:58.072590 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:40:58.072604 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:40:58.072618 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:40:58.072632 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:40:58.072646 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:40:58.072660 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:40:58.072674 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:40:58.072688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:40:58.072702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:40:58.072720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:40:58.072734 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:40:58.072748 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:40:58.072762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:40:58.072776 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:40:58.072790 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:40:58.072804 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:40:58.072818 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:40:58.072842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:40:58.072864 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:40:58.072878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:40:58.072892 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:40:58.072948 systemd-journald[201]: Collecting audit messages is disabled. Jan 13 21:40:58.072986 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:40:58.073008 systemd-journald[201]: Journal started Jan 13 21:40:58.073040 systemd-journald[201]: Runtime Journal (/run/log/journal/8488a11ca21c462abb64bf2ddf0e252e) is 4.7M, max 37.9M, 33.2M free. Jan 13 21:40:58.032415 systemd-modules-load[202]: Inserted module 'overlay' Jan 13 21:40:58.123116 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:40:58.123163 kernel: Bridge firewalling registered Jan 13 21:40:58.123182 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:40:58.081985 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 13 21:40:58.125288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 13 21:40:58.126252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:40:58.133677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:40:58.139500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:40:58.148522 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:40:58.152688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:40:58.163580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:40:58.167530 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:40:58.178762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:40:58.182690 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:40:58.185068 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:40:58.191629 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:40:58.196527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:40:58.211991 dracut-cmdline[236]: dracut-dracut-053 Jan 13 21:40:58.215407 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 21:40:58.244484 systemd-resolved[238]: Positive Trust Anchors: Jan 13 21:40:58.245497 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:40:58.245540 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:40:58.253490 systemd-resolved[238]: Defaulting to hostname 'linux'. Jan 13 21:40:58.256720 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:40:58.257495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:40:58.309365 kernel: SCSI subsystem initialized Jan 13 21:40:58.321354 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:40:58.334395 kernel: iscsi: registered transport (tcp) Jan 13 21:40:58.359870 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:40:58.359950 kernel: QLogic iSCSI HBA Driver Jan 13 21:40:58.412735 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:40:58.419587 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:40:58.459791 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 21:40:58.459891 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:40:58.461374 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:40:58.510386 kernel: raid6: sse2x4 gen() 13333 MB/s Jan 13 21:40:58.528364 kernel: raid6: sse2x2 gen() 9171 MB/s Jan 13 21:40:58.546908 kernel: raid6: sse2x1 gen() 8531 MB/s Jan 13 21:40:58.547077 kernel: raid6: using algorithm sse2x4 gen() 13333 MB/s Jan 13 21:40:58.565947 kernel: raid6: .... xor() 8014 MB/s, rmw enabled Jan 13 21:40:58.566089 kernel: raid6: using ssse3x2 recovery algorithm Jan 13 21:40:58.591391 kernel: xor: automatically using best checksumming function avx Jan 13 21:40:58.752362 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:40:58.767406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:40:58.775603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:40:58.795811 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jan 13 21:40:58.804161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:40:58.813708 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:40:58.834360 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Jan 13 21:40:58.873964 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:40:58.881510 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:40:58.993899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:40:59.000538 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:40:59.034269 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:40:59.037005 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 21:40:59.040398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:40:59.042268 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:40:59.050681 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:40:59.078943 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:40:59.118352 kernel: ACPI: bus type USB registered Jan 13 21:40:59.123522 kernel: usbcore: registered new interface driver usbfs Jan 13 21:40:59.123566 kernel: usbcore: registered new interface driver hub Jan 13 21:40:59.123585 kernel: usbcore: registered new device driver usb Jan 13 21:40:59.132346 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 13 21:40:59.208609 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:40:59.208659 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 13 21:40:59.208925 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 21:40:59.210290 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 13 21:40:59.210544 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 21:40:59.210748 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 21:40:59.210985 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 13 21:40:59.211186 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 13 21:40:59.211416 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:40:59.211436 kernel: GPT:17805311 != 125829119 Jan 13 21:40:59.211458 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:40:59.211475 kernel: GPT:17805311 != 125829119 Jan 13 21:40:59.211491 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 13 21:40:59.211515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:40:59.211532 kernel: hub 1-0:1.0: USB hub found Jan 13 21:40:59.211786 kernel: hub 1-0:1.0: 4 ports detected Jan 13 21:40:59.212019 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 21:40:59.212250 kernel: hub 2-0:1.0: USB hub found Jan 13 21:40:59.217519 kernel: hub 2-0:1.0: 4 ports detected Jan 13 21:40:59.176403 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:40:59.177044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:40:59.178924 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:40:59.230027 kernel: AVX version of gcm_enc/dec engaged. Jan 13 21:40:59.204823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:40:59.205139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:40:59.209774 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:40:59.220657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:40:59.249642 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469) Jan 13 21:40:59.260377 kernel: AES CTR mode by8 optimization enabled Jan 13 21:40:59.262286 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:40:59.374507 kernel: libata version 3.00 loaded. 
Jan 13 21:40:59.374561 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:40:59.374855 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:40:59.374886 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:40:59.375082 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:40:59.375284 kernel: scsi host0: ahci Jan 13 21:40:59.375517 kernel: scsi host1: ahci Jan 13 21:40:59.375707 kernel: scsi host2: ahci Jan 13 21:40:59.375909 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (470) Jan 13 21:40:59.375930 kernel: scsi host3: ahci Jan 13 21:40:59.376114 kernel: scsi host4: ahci Jan 13 21:40:59.376346 kernel: scsi host5: ahci Jan 13 21:40:59.376546 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 13 21:40:59.376567 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 13 21:40:59.376585 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 13 21:40:59.376603 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 13 21:40:59.376620 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 13 21:40:59.376637 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 13 21:40:59.380863 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:40:59.382057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:40:59.395071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:40:59.411027 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:40:59.412023 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 13 21:40:59.420542 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:40:59.425548 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:40:59.436898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:40:59.437237 disk-uuid[558]: Primary Header is updated. Jan 13 21:40:59.437237 disk-uuid[558]: Secondary Entries is updated. Jan 13 21:40:59.437237 disk-uuid[558]: Secondary Header is updated. Jan 13 21:40:59.449460 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 21:40:59.473715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:40:59.594356 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 21:40:59.623513 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:40:59.623637 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:40:59.625421 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:40:59.627649 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:40:59.627691 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 21:40:59.630793 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:40:59.644071 kernel: usbcore: registered new interface driver usbhid Jan 13 21:40:59.644132 kernel: usbhid: USB HID core driver Jan 13 21:40:59.652561 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 13 21:40:59.652628 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 13 21:41:00.462717 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:41:00.462815 disk-uuid[559]: The operation has completed successfully. Jan 13 21:41:00.521228 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 13 21:41:00.521452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:41:00.547542 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:41:00.553798 sh[584]: Success Jan 13 21:41:00.572155 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 13 21:41:00.635968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:41:00.645459 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:41:00.651671 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:41:00.684401 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 21:41:00.684558 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:41:00.684589 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:41:00.684607 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:41:00.685349 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:41:00.698010 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:41:00.699608 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:41:00.705567 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:41:00.707858 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 21:41:00.729489 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 21:41:00.729577 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:41:00.729599 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:41:00.736358 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:41:00.748213 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:41:00.750430 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 21:41:00.757877 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:41:00.768692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:41:00.856546 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:41:00.864629 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:41:00.905116 systemd-networkd[769]: lo: Link UP Jan 13 21:41:00.906251 systemd-networkd[769]: lo: Gained carrier Jan 13 21:41:00.908709 systemd-networkd[769]: Enumeration completed Jan 13 21:41:00.908874 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:41:00.909890 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:41:00.909896 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:41:00.911236 systemd-networkd[769]: eth0: Link UP Jan 13 21:41:00.911242 systemd-networkd[769]: eth0: Gained carrier Jan 13 21:41:00.911253 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:41:00.912045 systemd[1]: Reached target network.target - Network. 
Jan 13 21:41:00.925350 ignition[681]: Ignition 2.20.0
Jan 13 21:41:00.925378 ignition[681]: Stage: fetch-offline
Jan 13 21:41:00.925477 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:41:00.927947 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:41:00.925497 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:41:00.925716 ignition[681]: parsed url from cmdline: ""
Jan 13 21:41:00.925724 ignition[681]: no config URL provided
Jan 13 21:41:00.925733 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:41:00.925748 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:41:00.925784 ignition[681]: failed to fetch config: resource requires networking
Jan 13 21:41:00.926269 ignition[681]: Ignition finished successfully
Jan 13 21:41:00.936648 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:41:00.955539 systemd-networkd[769]: eth0: DHCPv4 address 10.230.41.226/30, gateway 10.230.41.225 acquired from 10.230.41.225
Jan 13 21:41:00.955615 ignition[776]: Ignition 2.20.0
Jan 13 21:41:00.955627 ignition[776]: Stage: fetch
Jan 13 21:41:00.955939 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:41:00.955959 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:41:00.956100 ignition[776]: parsed url from cmdline: ""
Jan 13 21:41:00.956107 ignition[776]: no config URL provided
Jan 13 21:41:00.956116 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:41:00.956131 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:41:00.956295 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 21:41:00.956605 ignition[776]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 13 21:41:00.956656 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 21:41:00.956677 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 21:41:01.156879 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 13 21:41:01.175743 ignition[776]: GET result: OK
Jan 13 21:41:01.176289 ignition[776]: parsing config with SHA512: 67d8a3de4866a3578671e66df2ca32abc86b9718996b0a93c909903189b20ec0efb422d1f33ceca43f61f645f9cdb5e977de582304707147ba5549dfdb399dd9
Jan 13 21:41:01.181448 unknown[776]: fetched base config from "system"
Jan 13 21:41:01.181991 unknown[776]: fetched base config from "system"
Jan 13 21:41:01.182001 unknown[776]: fetched user config from "openstack"
Jan 13 21:41:01.183808 ignition[776]: fetch: fetch complete
Jan 13 21:41:01.185897 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:41:01.183818 ignition[776]: fetch: fetch passed
Jan 13 21:41:01.183908 ignition[776]: Ignition finished successfully
Jan 13 21:41:01.208574 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:41:01.224240 ignition[783]: Ignition 2.20.0
Jan 13 21:41:01.224265 ignition[783]: Stage: kargs
Jan 13 21:41:01.224539 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:41:01.224560 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:41:01.226822 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:41:01.225430 ignition[783]: kargs: kargs passed
Jan 13 21:41:01.225503 ignition[783]: Ignition finished successfully
Jan 13 21:41:01.233517 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:41:01.251209 ignition[789]: Ignition 2.20.0
Jan 13 21:41:01.251233 ignition[789]: Stage: disks
Jan 13 21:41:01.251527 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:41:01.255032 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:41:01.251548 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:41:01.252364 ignition[789]: disks: disks passed
Jan 13 21:41:01.256650 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:41:01.252452 ignition[789]: Ignition finished successfully
Jan 13 21:41:01.258202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:41:01.259493 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:41:01.260996 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:41:01.262170 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:41:01.272534 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:41:01.292817 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:41:01.298208 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:41:01.302456 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:41:01.419417 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 21:41:01.420946 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:41:01.422347 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:41:01.428425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:41:01.431484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:41:01.433858 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:41:01.438526 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 21:41:01.439916 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:41:01.439963 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:41:01.445746 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:41:01.448355 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Jan 13 21:41:01.452646 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 21:41:01.452682 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:41:01.452732 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:41:01.460668 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:41:01.463284 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:41:01.465132 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:41:01.544847 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:41:01.553826 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:41:01.561057 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:41:01.569604 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:41:01.678963 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:41:01.683492 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:41:01.687533 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:41:01.699720 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:41:01.702509 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 21:41:01.734968 ignition[922]: INFO : Ignition 2.20.0
Jan 13 21:41:01.734968 ignition[922]: INFO : Stage: mount
Jan 13 21:41:01.734968 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:41:01.734968 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:41:01.734968 ignition[922]: INFO : mount: mount passed
Jan 13 21:41:01.734968 ignition[922]: INFO : Ignition finished successfully
Jan 13 21:41:01.738487 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:41:01.744233 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:41:02.245225 systemd-networkd[769]: eth0: Gained IPv6LL
Jan 13 21:41:03.753681 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8a78:24:19ff:fee6:29e2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8a78:24:19ff:fee6:29e2/64 assigned by NDisc.
Jan 13 21:41:03.753708 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 13 21:41:08.629734 coreos-metadata[808]: Jan 13 21:41:08.629 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:41:08.652764 coreos-metadata[808]: Jan 13 21:41:08.652 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:41:08.669802 coreos-metadata[808]: Jan 13 21:41:08.669 INFO Fetch successful
Jan 13 21:41:08.670716 coreos-metadata[808]: Jan 13 21:41:08.670 INFO wrote hostname srv-c9w3r.gb1.brightbox.com to /sysroot/etc/hostname
Jan 13 21:41:08.673429 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 21:41:08.673654 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 21:41:08.691012 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:41:08.699810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:41:08.719382 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Jan 13 21:41:08.724602 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 21:41:08.724647 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:41:08.724668 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:41:08.730356 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:41:08.733958 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:41:08.774589 ignition[957]: INFO : Ignition 2.20.0
Jan 13 21:41:08.774589 ignition[957]: INFO : Stage: files
Jan 13 21:41:08.776436 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:41:08.776436 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:41:08.776436 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:41:08.779291 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:41:08.779291 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:41:08.781537 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:41:08.782538 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:41:08.783693 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:41:08.783604 unknown[957]: wrote ssh authorized keys file for user: core
Jan 13 21:41:08.785789 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:41:08.786918 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:41:08.786918 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:41:08.786918 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:41:08.786918 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:41:08.786918 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:41:08.786918 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:41:08.786918 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 21:41:09.368674 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 21:41:11.372438 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:41:11.376706 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:41:11.376706 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:41:11.376706 ignition[957]: INFO : files: files passed
Jan 13 21:41:11.376706 ignition[957]: INFO : Ignition finished successfully
Jan 13 21:41:11.376069 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:41:11.391750 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:41:11.394888 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:41:11.397614 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:41:11.397743 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:41:11.417881 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:41:11.417881 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:41:11.421272 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:41:11.421595 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:41:11.423972 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:41:11.437025 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:41:11.473781 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:41:11.473965 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:41:11.476025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:41:11.477140 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:41:11.478668 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:41:11.484571 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:41:11.503507 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:41:11.508595 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:41:11.531053 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:41:11.533014 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:41:11.533977 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:41:11.535718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:41:11.535896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:41:11.537462 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:41:11.538331 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:41:11.539760 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:41:11.541087 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:41:11.542446 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:41:11.543973 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:41:11.545344 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:41:11.546825 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:41:11.548273 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:41:11.549773 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:41:11.550985 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:41:11.551171 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:41:11.552760 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:41:11.553640 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:41:11.554920 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:41:11.555383 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:41:11.556549 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:41:11.556697 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:41:11.558734 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:41:11.558909 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:41:11.560471 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:41:11.560681 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:41:11.568714 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:41:11.569409 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:41:11.569677 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:41:11.574631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:41:11.576701 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:41:11.576962 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:41:11.579573 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:41:11.579810 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:41:11.588738 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:41:11.591628 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:41:11.600890 ignition[1009]: INFO : Ignition 2.20.0
Jan 13 21:41:11.600890 ignition[1009]: INFO : Stage: umount
Jan 13 21:41:11.600890 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:41:11.600890 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:41:11.600890 ignition[1009]: INFO : umount: umount passed
Jan 13 21:41:11.600890 ignition[1009]: INFO : Ignition finished successfully
Jan 13 21:41:11.602861 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:41:11.603051 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:41:11.605217 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:41:11.605369 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:41:11.606068 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:41:11.606129 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:41:11.608999 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:41:11.609079 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:41:11.610053 systemd[1]: Stopped target network.target - Network.
Jan 13 21:41:11.610661 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:41:11.610728 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:41:11.613487 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:41:11.615025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:41:11.615166 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:41:11.616678 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:41:11.618081 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:41:11.618757 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:41:11.618819 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:41:11.620772 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:41:11.620832 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:41:11.621973 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:41:11.622076 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:41:11.630489 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:41:11.630573 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:41:11.631496 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:41:11.633761 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:41:11.635565 systemd-networkd[769]: eth0: DHCPv6 lease lost
Jan 13 21:41:11.639113 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:41:11.640191 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:41:11.640366 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:41:11.641887 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:41:11.642041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:41:11.646247 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:41:11.646718 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:41:11.652488 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:41:11.653838 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:41:11.653920 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:41:11.656421 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:41:11.656490 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:41:11.659027 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:41:11.659092 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:41:11.661032 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:41:11.661107 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:41:11.662695 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:41:11.673879 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:41:11.674085 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:41:11.677953 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:41:11.678083 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:41:11.680272 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:41:11.681015 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:41:11.682023 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:41:11.682078 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:41:11.683480 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:41:11.683560 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:41:11.685545 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:41:11.685624 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:41:11.686893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:41:11.686958 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:41:11.693570 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:41:11.694245 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:41:11.694929 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:41:11.697718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:41:11.697793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:41:11.703374 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:41:11.704423 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:41:11.741415 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:41:11.741605 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:41:11.743644 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:41:11.744455 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:41:11.744558 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:41:11.757036 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:41:11.766254 systemd[1]: Switching root.
Jan 13 21:41:11.806592 systemd-journald[201]: Journal stopped
Jan 13 21:41:13.188454 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:41:13.188576 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:41:13.188620 kernel: SELinux: policy capability open_perms=1
Jan 13 21:41:13.188641 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:41:13.188667 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:41:13.188686 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:41:13.188705 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:41:13.188723 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:41:13.188741 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:41:13.188759 kernel: audit: type=1403 audit(1736804472.043:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:41:13.188781 systemd[1]: Successfully loaded SELinux policy in 50.348ms.
Jan 13 21:41:13.188826 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.908ms.
Jan 13 21:41:13.188849 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:41:13.188871 systemd[1]: Detected virtualization kvm.
Jan 13 21:41:13.188892 systemd[1]: Detected architecture x86-64.
Jan 13 21:41:13.188923 systemd[1]: Detected first boot.
Jan 13 21:41:13.188946 systemd[1]: Hostname set to .
Jan 13 21:41:13.188978 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:41:13.188999 zram_generator::config[1052]: No configuration found.
Jan 13 21:41:13.189026 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:41:13.189053 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:41:13.189087 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:41:13.189107 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:41:13.189128 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:41:13.189160 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:41:13.189204 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:41:13.189224 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:41:13.189250 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:41:13.189271 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:41:13.189297 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:41:13.189345 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:41:13.189370 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:41:13.189391 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:41:13.189426 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:41:13.189454 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:41:13.189492 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:41:13.189525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:41:13.189551 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:41:13.189572 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:41:13.189598 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:41:13.189633 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:41:13.189655 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:41:13.189675 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:41:13.189696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:41:13.189716 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:41:13.189736 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:41:13.189756 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:41:13.189776 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:41:13.189808 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:41:13.189837 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:41:13.189868 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:41:13.189890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:41:13.189915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:41:13.189935 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:41:13.189956 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:41:13.189976 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:41:13.189996 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:13.190047 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:41:13.190068 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:41:13.190105 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:41:13.190124 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:41:13.190141 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:41:13.190158 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:41:13.190176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:41:13.190206 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:41:13.190239 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:41:13.190271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:41:13.190296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:41:13.196468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:41:13.196539 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:41:13.196563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:41:13.196598 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:41:13.196620 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:41:13.196640 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:41:13.196668 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:41:13.196690 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:41:13.196711 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:41:13.196731 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:41:13.196751 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:41:13.196838 systemd-journald[1155]: Collecting audit messages is disabled.
Jan 13 21:41:13.196906 kernel: loop: module loaded
Jan 13 21:41:13.196928 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:41:13.196964 kernel: fuse: init (API version 7.39)
Jan 13 21:41:13.196989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:41:13.197014 systemd-journald[1155]: Journal started
Jan 13 21:41:13.197046 systemd-journald[1155]: Runtime Journal (/run/log/journal/8488a11ca21c462abb64bf2ddf0e252e) is 4.7M, max 37.9M, 33.2M free.
Jan 13 21:41:12.852716 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:41:13.199586 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:41:13.199622 systemd[1]: Stopped verity-setup.service.
Jan 13 21:41:12.875908 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:41:12.876652 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:41:13.207423 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:13.213567 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:41:13.217589 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:41:13.218401 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:41:13.219213 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:41:13.220381 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:41:13.221212 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:41:13.222112 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:41:13.223195 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:41:13.224470 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:41:13.225682 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:41:13.225917 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:41:13.227143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:41:13.227382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:41:13.228676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:41:13.228946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:41:13.230248 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:41:13.230588 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:41:13.231719 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:41:13.231960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:41:13.233003 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:41:13.234074 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:41:13.235144 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:41:13.257076 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:41:13.270557 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:41:13.276437 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:41:13.277241 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:41:13.277289 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:41:13.281866 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:41:13.314586 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:41:13.319420 kernel: ACPI: bus type drm_connector registered
Jan 13 21:41:13.322622 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:41:13.323485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:41:13.327385 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:41:13.329588 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:41:13.330494 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:41:13.338605 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:41:13.341445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:41:13.350566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:41:13.354564 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:41:13.361630 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:41:13.376367 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:41:13.377452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:41:13.379953 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:41:13.381023 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:41:13.383123 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:41:13.412138 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:41:13.413742 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:41:13.428171 systemd-journald[1155]: Time spent on flushing to /var/log/journal/8488a11ca21c462abb64bf2ddf0e252e is 134.725ms for 1125 entries.
Jan 13 21:41:13.428171 systemd-journald[1155]: System Journal (/var/log/journal/8488a11ca21c462abb64bf2ddf0e252e) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:41:13.591715 systemd-journald[1155]: Received client request to flush runtime journal.
Jan 13 21:41:13.591783 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 21:41:13.591811 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:41:13.591848 kernel: loop1: detected capacity change from 0 to 210664
Jan 13 21:41:13.434041 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:41:13.509211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:41:13.519275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:41:13.523189 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:41:13.543996 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:41:13.562572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:41:13.595959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:41:13.599281 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:41:13.611599 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:41:13.647393 kernel: loop2: detected capacity change from 0 to 141000
Jan 13 21:41:13.661307 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jan 13 21:41:13.662036 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jan 13 21:41:13.690948 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 21:41:13.695949 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:41:13.709396 kernel: loop3: detected capacity change from 0 to 8
Jan 13 21:41:13.738372 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 21:41:13.772383 kernel: loop5: detected capacity change from 0 to 210664
Jan 13 21:41:13.802225 kernel: loop6: detected capacity change from 0 to 141000
Jan 13 21:41:13.827523 kernel: loop7: detected capacity change from 0 to 8
Jan 13 21:41:13.833199 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 21:41:13.834007 (sd-merge)[1211]: Merged extensions into '/usr'.
Jan 13 21:41:13.845895 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:41:13.845920 systemd[1]: Reloading...
Jan 13 21:41:13.993370 zram_generator::config[1237]: No configuration found.
Jan 13 21:41:14.143934 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:41:14.262383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:41:14.332215 systemd[1]: Reloading finished in 485 ms.
Jan 13 21:41:14.367216 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:41:14.373998 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:41:14.388624 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:41:14.394977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:41:14.428258 systemd[1]: Reloading requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:41:14.428286 systemd[1]: Reloading...
Jan 13 21:41:14.430509 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:41:14.431020 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:41:14.434827 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:41:14.435274 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Jan 13 21:41:14.436429 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Jan 13 21:41:14.449059 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:41:14.449083 systemd-tmpfiles[1294]: Skipping /boot
Jan 13 21:41:14.494938 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:41:14.494958 systemd-tmpfiles[1294]: Skipping /boot
Jan 13 21:41:14.511392 zram_generator::config[1317]: No configuration found.
Jan 13 21:41:14.716417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:41:14.785844 systemd[1]: Reloading finished in 356 ms.
Jan 13 21:41:14.812684 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:41:14.822303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:41:14.837766 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:41:14.842685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:41:14.848911 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:41:14.858702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:41:14.870801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:41:14.875970 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:41:14.880856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:14.881132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:41:14.888677 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:41:14.894541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:41:14.900756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:41:14.902042 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:41:14.911666 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:41:14.912385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:14.914029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:41:14.914825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:41:14.925680 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:14.926004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:41:14.933140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:41:14.934052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:41:14.934206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:14.935493 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:41:14.943825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:14.944139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:41:14.951972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:41:14.954645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:41:14.954907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:41:14.957799 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:41:14.966634 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:41:14.969179 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:41:14.971061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:41:14.994522 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:41:14.995767 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:41:14.996084 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:41:15.008620 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:41:15.011206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:41:15.013746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:41:15.014916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:41:15.020943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:41:15.021183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:41:15.024668 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:41:15.025406 augenrules[1419]: No rules
Jan 13 21:41:15.027778 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:41:15.028050 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 21:41:15.033945 systemd-udevd[1389]: Using default interface naming scheme 'v255'.
Jan 13 21:41:15.050488 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:41:15.068587 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:41:15.074269 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:41:15.076769 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:41:15.106530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:41:15.119624 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:41:15.231225 systemd-resolved[1388]: Positive Trust Anchors:
Jan 13 21:41:15.232178 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:41:15.232357 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:41:15.242670 systemd-resolved[1388]: Using system hostname 'srv-c9w3r.gb1.brightbox.com'.
Jan 13 21:41:15.245427 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:41:15.246423 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:41:15.250801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:41:15.253684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:41:15.254798 systemd-networkd[1437]: lo: Link UP
Jan 13 21:41:15.254809 systemd-networkd[1437]: lo: Gained carrier
Jan 13 21:41:15.255899 systemd-timesyncd[1411]: No network connectivity, watching for changes.
Jan 13 21:41:15.256996 systemd-networkd[1437]: Enumeration completed
Jan 13 21:41:15.257133 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:41:15.257944 systemd[1]: Reached target network.target - Network.
Jan 13 21:41:15.264591 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:41:15.334391 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1436)
Jan 13 21:41:15.339366 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:41:15.379792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:41:15.388953 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:41:15.418000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:41:15.449196 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:41:15.449210 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:41:15.451301 systemd-networkd[1437]: eth0: Link UP
Jan 13 21:41:15.451314 systemd-networkd[1437]: eth0: Gained carrier
Jan 13 21:41:15.451362 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:41:15.464637 systemd-networkd[1437]: eth0: DHCPv4 address 10.230.41.226/30, gateway 10.230.41.225 acquired from 10.230.41.225
Jan 13 21:41:15.465979 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Jan 13 21:41:15.485357 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:41:15.493403 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:41:15.495374 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:41:15.530760 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 21:41:15.535616 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 21:41:15.535926 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 21:41:15.549115 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 21:41:15.600677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:41:15.765481 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:41:15.788365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:41:15.794647 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:41:15.813254 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:41:15.849984 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:41:15.851683 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:41:15.852514 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:41:15.853298 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:41:15.854275 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:41:15.855351 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:41:15.856191 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:41:15.856959 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:41:15.857684 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:41:15.857737 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:41:15.858344 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:41:15.860664 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:41:15.863183 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:41:15.868539 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:41:15.871142 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:41:15.872669 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:41:15.873651 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:41:15.874360 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:41:15.875158 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:41:15.875210 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:41:15.890221 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:41:15.897717 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:41:15.898680 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:41:15.900567 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:41:15.904962 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:41:15.910661 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:41:15.912430 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:41:15.919536 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:41:15.931540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:41:15.936213 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:41:15.950644 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:41:15.953390 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:41:15.954869 jq[1480]: false
Jan 13 21:41:15.954738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:41:15.960901 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:41:15.966528 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:41:15.969781 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:41:15.978990 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:41:15.979323 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:41:15.984152 extend-filesystems[1481]: Found loop4
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found loop5
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found loop6
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found loop7
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found vda
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found vda1
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found vda2
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found vda3
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found usr
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found vda4
Jan 13 21:41:15.988427 extend-filesystems[1481]: Found vda6
Jan 13 21:41:16.002016 extend-filesystems[1481]: Found vda7
Jan 13 21:41:16.002016 extend-filesystems[1481]: Found vda9
Jan 13 21:41:16.002016 extend-filesystems[1481]: Checking size of /dev/vda9
Jan 13 21:41:16.011313 update_engine[1489]: I20250113 21:41:16.011174 1489 main.cc:92] Flatcar Update Engine starting
Jan 13 21:41:16.025980 extend-filesystems[1481]: Resized partition /dev/vda9
Jan 13 21:41:16.025942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:41:16.028403 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:41:16.029238 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:41:16.032557 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 21:41:16.040402 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 13 21:41:16.045283 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:41:16.045615 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:41:16.049811 dbus-daemon[1479]: [system] SELinux support is enabled
Jan 13 21:41:16.050061 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:41:16.061622 dbus-daemon[1479]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1437 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 21:41:16.064537 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:41:16.066808 update_engine[1489]: I20250113 21:41:16.064129 1489 update_check_scheduler.cc:74] Next update check in 7m23s
Jan 13 21:41:16.066197 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 21:41:16.064588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:41:16.066485 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:41:16.066515 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:41:16.067433 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:41:16.069357 jq[1490]: true
Jan 13 21:41:16.089071 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:41:16.091517 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 21:41:16.100550 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:41:16.133221 jq[1514]: true
Jan 13 21:41:16.148498 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1448)
Jan 13 21:41:16.281135 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:41:16.289059 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 13 21:41:16.289106 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:41:16.289438 systemd-logind[1487]: New seat seat0.
Jan 13 21:41:16.289527 systemd-timesyncd[1411]: Contacted time server 217.144.90.27:123 (1.flatcar.pool.ntp.org).
Jan 13 21:41:16.289621 systemd-timesyncd[1411]: Initial clock synchronization to Mon 2025-01-13 21:41:16.535996 UTC.
Jan 13 21:41:16.290947 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:41:16.323346 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 13 21:41:16.353976 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:41:16.353976 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 13 21:41:16.353976 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 13 21:41:16.361153 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Jan 13 21:41:16.362808 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:41:16.363104 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:41:16.373518 bash[1540]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:41:16.375752 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:41:16.388150 systemd[1]: Starting sshkeys.service...
Jan 13 21:41:16.432321 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 21:41:16.435619 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 21:41:16.436318 dbus-daemon[1479]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1516 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 21:41:16.443570 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 21:41:16.445287 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 21:41:16.460223 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 21:41:16.494608 polkitd[1550]: Started polkitd version 121
Jan 13 21:41:16.517024 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:41:16.517842 polkitd[1550]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 21:41:16.517939 polkitd[1550]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 21:41:16.519015 polkitd[1550]: Finished loading, compiling and executing 2 rules
Jan 13 21:41:16.519904 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 21:41:16.520266 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 21:41:16.523083 polkitd[1550]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 21:41:16.547415 systemd-hostnamed[1516]: Hostname set to (static)
Jan 13 21:41:16.554592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:41:16.564829 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:41:16.568628 systemd[1]: Started sshd@0-10.230.41.226:22-139.178.68.195:55456.service - OpenSSH per-connection server daemon (139.178.68.195:55456).
Jan 13 21:41:16.578939 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:41:16.579213 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:41:16.589354 containerd[1511]: time="2025-01-13T21:41:16.588069968Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 21:41:16.591282 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:41:16.615295 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:41:16.624875 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:41:16.638349 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 21:41:16.639916 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:41:16.649240 containerd[1511]: time="2025-01-13T21:41:16.649178779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:41:16.651439 containerd[1511]: time="2025-01-13T21:41:16.651374902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:41:16.651439 containerd[1511]: time="2025-01-13T21:41:16.651432997Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:41:16.651547 containerd[1511]: time="2025-01-13T21:41:16.651458124Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:41:16.651782 containerd[1511]: time="2025-01-13T21:41:16.651729570Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:41:16.651843 containerd[1511]: time="2025-01-13T21:41:16.651788524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:41:16.651955 containerd[1511]: time="2025-01-13T21:41:16.651921648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:41:16.651955 containerd[1511]: time="2025-01-13T21:41:16.651951977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:41:16.652220 containerd[1511]: time="2025-01-13T21:41:16.652163401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:41:16.652275 containerd[1511]: time="2025-01-13T21:41:16.652219068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:41:16.652275 containerd[1511]: time="2025-01-13T21:41:16.652240740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:41:16.652275 containerd[1511]: time="2025-01-13T21:41:16.652257498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:41:16.652469 containerd[1511]: time="2025-01-13T21:41:16.652437368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:41:16.652859 containerd[1511]: time="2025-01-13T21:41:16.652825641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:41:16.652999 containerd[1511]: time="2025-01-13T21:41:16.652964180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:41:16.653043 containerd[1511]: time="2025-01-13T21:41:16.652995869Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:41:16.653148 containerd[1511]: time="2025-01-13T21:41:16.653116624Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:41:16.653255 containerd[1511]: time="2025-01-13T21:41:16.653224904Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:41:16.656812 containerd[1511]: time="2025-01-13T21:41:16.656746013Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:41:16.656880 containerd[1511]: time="2025-01-13T21:41:16.656832588Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:41:16.656880 containerd[1511]: time="2025-01-13T21:41:16.656859161Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:41:16.656944 containerd[1511]: time="2025-01-13T21:41:16.656881195Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:41:16.656944 containerd[1511]: time="2025-01-13T21:41:16.656900539Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:41:16.657093 containerd[1511]: time="2025-01-13T21:41:16.657065751Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657455579Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657680206Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657707570Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657728988Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657750620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657770806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657788985Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657809285Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657829688Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657849097Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657866522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657883962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657933381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658357 containerd[1511]: time="2025-01-13T21:41:16.657956240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.657988710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658010048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658030063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658049908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658067301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658085045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658105005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658143931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658165748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658183737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658201046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658220327Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658255322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658278057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.658905 containerd[1511]: time="2025-01-13T21:41:16.658297786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659147264Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659180556Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659198549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659216385Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659232023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659252209Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659278920Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:41:16.659420 containerd[1511]: time="2025-01-13T21:41:16.659296372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:41:16.659895 containerd[1511]: time="2025-01-13T21:41:16.659764251Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:41:16.659895 containerd[1511]: time="2025-01-13T21:41:16.659826958Z" level=info msg="Connect containerd service"
Jan 13 21:41:16.660202 containerd[1511]: time="2025-01-13T21:41:16.659899042Z" level=info msg="using legacy CRI server"
Jan 13 21:41:16.660202 containerd[1511]: time="2025-01-13T21:41:16.659928057Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:41:16.660202 containerd[1511]: time="2025-01-13T21:41:16.660076836Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:41:16.661070 containerd[1511]: time="2025-01-13T21:41:16.661019729Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661283555Z" level=info msg="Start subscribing containerd event"
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661370620Z" level=info msg="Start recovering state"
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661482106Z" level=info msg="Start event monitor"
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661520712Z" level=info msg="Start snapshots syncer"
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661539217Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661551794Z" level=info msg="Start streaming server"
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661648829Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661723731Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:41:16.664073 containerd[1511]: time="2025-01-13T21:41:16.661871148Z" level=info msg="containerd successfully booted in 0.074947s"
Jan 13 21:41:16.661992 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:41:17.028936 systemd-networkd[1437]: eth0: Gained IPv6LL
Jan 13 21:41:17.034138 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:41:17.036716 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:41:17.047928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:41:17.052232 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:41:17.090510 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:41:17.529869 sshd[1571]: Accepted publickey for core from 139.178.68.195 port 55456 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:17.531736 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:17.551967 systemd-logind[1487]: New session 1 of user core.
Jan 13 21:41:17.556166 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:41:17.566849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:41:17.600782 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:41:17.609893 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:41:17.626729 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:41:17.729538 systemd-networkd[1437]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8a78:24:19ff:fee6:29e2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8a78:24:19ff:fee6:29e2/64 assigned by NDisc.
Jan 13 21:41:17.729549 systemd-networkd[1437]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 13 21:41:17.779730 systemd[1597]: Queued start job for default target default.target.
Jan 13 21:41:17.788213 systemd[1597]: Created slice app.slice - User Application Slice.
Jan 13 21:41:17.788264 systemd[1597]: Reached target paths.target - Paths.
Jan 13 21:41:17.788288 systemd[1597]: Reached target timers.target - Timers.
Jan 13 21:41:17.792539 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:41:17.820267 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:41:17.821309 systemd[1597]: Reached target sockets.target - Sockets.
Jan 13 21:41:17.821337 systemd[1597]: Reached target basic.target - Basic System.
Jan 13 21:41:17.821421 systemd[1597]: Reached target default.target - Main User Target.
Jan 13 21:41:17.821493 systemd[1597]: Startup finished in 184ms.
Jan 13 21:41:17.821754 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:41:17.833823 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:41:18.032913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:41:18.055310 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:41:18.489836 systemd[1]: Started sshd@1-10.230.41.226:22-139.178.68.195:49588.service - OpenSSH per-connection server daemon (139.178.68.195:49588).
Jan 13 21:41:18.809867 kubelet[1612]: E0113 21:41:18.809667 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:41:18.816083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:41:18.816783 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:41:18.817268 systemd[1]: kubelet.service: Consumed 1.028s CPU time.
Jan 13 21:41:19.414426 sshd[1619]: Accepted publickey for core from 139.178.68.195 port 49588 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:19.416272 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:19.423731 systemd-logind[1487]: New session 2 of user core.
Jan 13 21:41:19.428608 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:41:20.051681 sshd[1624]: Connection closed by 139.178.68.195 port 49588
Jan 13 21:41:20.052603 sshd-session[1619]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:20.057185 systemd[1]: sshd@1-10.230.41.226:22-139.178.68.195:49588.service: Deactivated successfully.
Jan 13 21:41:20.059485 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:41:20.060578 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:41:20.062146 systemd-logind[1487]: Removed session 2.
Jan 13 21:41:20.209052 systemd[1]: Started sshd@2-10.230.41.226:22-139.178.68.195:49600.service - OpenSSH per-connection server daemon (139.178.68.195:49600).
Jan 13 21:41:21.133144 sshd[1630]: Accepted publickey for core from 139.178.68.195 port 49600 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:21.135144 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:21.142284 systemd-logind[1487]: New session 3 of user core.
Jan 13 21:41:21.149707 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:41:21.679845 agetty[1577]: failed to open credentials directory
Jan 13 21:41:21.679930 agetty[1578]: failed to open credentials directory
Jan 13 21:41:21.693833 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:41:21.699824 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 21:41:21.703121 systemd-logind[1487]: New session 4 of user core.
Jan 13 21:41:21.712801 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:41:21.717241 systemd-logind[1487]: New session 5 of user core.
Jan 13 21:41:21.726779 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:41:21.761775 sshd[1632]: Connection closed by 139.178.68.195 port 49600
Jan 13 21:41:21.765001 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:21.772188 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:41:21.772292 systemd[1]: sshd@2-10.230.41.226:22-139.178.68.195:49600.service: Deactivated successfully.
Jan 13 21:41:21.774930 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:41:21.777227 systemd-logind[1487]: Removed session 3.
Jan 13 21:41:23.454490 coreos-metadata[1478]: Jan 13 21:41:23.454 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:41:23.479910 coreos-metadata[1478]: Jan 13 21:41:23.479 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 13 21:41:23.486968 coreos-metadata[1478]: Jan 13 21:41:23.486 INFO Fetch failed with 404: resource not found
Jan 13 21:41:23.487057 coreos-metadata[1478]: Jan 13 21:41:23.486 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:41:23.487536 coreos-metadata[1478]: Jan 13 21:41:23.487 INFO Fetch successful
Jan 13 21:41:23.487677 coreos-metadata[1478]: Jan 13 21:41:23.487 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 13 21:41:23.501112 coreos-metadata[1478]: Jan 13 21:41:23.501 INFO Fetch successful
Jan 13 21:41:23.501347 coreos-metadata[1478]: Jan 13 21:41:23.501 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 13 21:41:23.515208 coreos-metadata[1478]: Jan 13 21:41:23.515 INFO Fetch successful
Jan 13 21:41:23.515409 coreos-metadata[1478]: Jan 13 21:41:23.515 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 13 21:41:23.533762 coreos-metadata[1478]: Jan 13 21:41:23.533 INFO Fetch successful
Jan 13 21:41:23.534100 coreos-metadata[1478]: Jan 13 21:41:23.534 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 13 21:41:23.552773 coreos-metadata[1478]: Jan 13 21:41:23.552 INFO Fetch successful
Jan 13 21:41:23.561793 coreos-metadata[1549]: Jan 13 21:41:23.561 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:41:23.584757 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 21:41:23.586270 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:41:23.587698 coreos-metadata[1549]: Jan 13 21:41:23.587 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 13 21:41:23.611986 coreos-metadata[1549]: Jan 13 21:41:23.611 INFO Fetch successful
Jan 13 21:41:23.612297 coreos-metadata[1549]: Jan 13 21:41:23.612 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 21:41:23.649044 coreos-metadata[1549]: Jan 13 21:41:23.648 INFO Fetch successful
Jan 13 21:41:23.651526 unknown[1549]: wrote ssh authorized keys file for user: core
Jan 13 21:41:23.682225 update-ssh-keys[1669]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:41:23.683077 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 21:41:23.687693 systemd[1]: Finished sshkeys.service.
Jan 13 21:41:23.689267 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:41:23.691509 systemd[1]: Startup finished in 1.376s (kernel) + 14.295s (initrd) + 11.697s (userspace) = 27.370s.
Jan 13 21:41:28.970992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:41:28.978689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:41:29.153566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:41:29.167387 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:41:29.250759 kubelet[1681]: E0113 21:41:29.250427 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:41:29.254222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:41:29.254509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:41:32.024731 systemd[1]: Started sshd@3-10.230.41.226:22-139.178.68.195:59560.service - OpenSSH per-connection server daemon (139.178.68.195:59560).
Jan 13 21:41:32.915969 sshd[1690]: Accepted publickey for core from 139.178.68.195 port 59560 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:32.918294 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:32.925023 systemd-logind[1487]: New session 6 of user core.
Jan 13 21:41:32.933626 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:41:33.535406 sshd[1692]: Connection closed by 139.178.68.195 port 59560
Jan 13 21:41:33.536341 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:33.541106 systemd[1]: sshd@3-10.230.41.226:22-139.178.68.195:59560.service: Deactivated successfully.
Jan 13 21:41:33.543057 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:41:33.543923 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:41:33.545676 systemd-logind[1487]: Removed session 6.
Jan 13 21:41:33.693872 systemd[1]: Started sshd@4-10.230.41.226:22-139.178.68.195:59564.service - OpenSSH per-connection server daemon (139.178.68.195:59564).
Jan 13 21:41:34.606512 sshd[1697]: Accepted publickey for core from 139.178.68.195 port 59564 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:34.608606 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:34.617324 systemd-logind[1487]: New session 7 of user core.
Jan 13 21:41:34.627651 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:41:35.225556 sshd[1699]: Connection closed by 139.178.68.195 port 59564
Jan 13 21:41:35.226660 sshd-session[1697]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:35.231900 systemd[1]: sshd@4-10.230.41.226:22-139.178.68.195:59564.service: Deactivated successfully.
Jan 13 21:41:35.234249 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:41:35.235191 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:41:35.236675 systemd-logind[1487]: Removed session 7.
Jan 13 21:41:35.388802 systemd[1]: Started sshd@5-10.230.41.226:22-139.178.68.195:53256.service - OpenSSH per-connection server daemon (139.178.68.195:53256).
Jan 13 21:41:36.296873 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 53256 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:36.299434 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:36.308434 systemd-logind[1487]: New session 8 of user core.
Jan 13 21:41:36.314755 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:41:36.916805 sshd[1706]: Connection closed by 139.178.68.195 port 53256
Jan 13 21:41:36.918731 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:36.923246 systemd[1]: sshd@5-10.230.41.226:22-139.178.68.195:53256.service: Deactivated successfully.
Jan 13 21:41:36.925964 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:41:36.928299 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:41:36.929790 systemd-logind[1487]: Removed session 8.
Jan 13 21:41:37.082769 systemd[1]: Started sshd@6-10.230.41.226:22-139.178.68.195:53264.service - OpenSSH per-connection server daemon (139.178.68.195:53264).
Jan 13 21:41:37.974364 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 53264 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:37.976383 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:37.983662 systemd-logind[1487]: New session 9 of user core.
Jan 13 21:41:37.990559 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:41:38.464688 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 21:41:38.465198 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:41:38.484126 sudo[1714]: pam_unix(sudo:session): session closed for user root
Jan 13 21:41:38.629354 sshd[1713]: Connection closed by 139.178.68.195 port 53264
Jan 13 21:41:38.628501 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:38.632954 systemd[1]: sshd@6-10.230.41.226:22-139.178.68.195:53264.service: Deactivated successfully.
Jan 13 21:41:38.635676 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:41:38.637504 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:41:38.638985 systemd-logind[1487]: Removed session 9.
Jan 13 21:41:38.787838 systemd[1]: Started sshd@7-10.230.41.226:22-139.178.68.195:53270.service - OpenSSH per-connection server daemon (139.178.68.195:53270).
Jan 13 21:41:39.470380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 21:41:39.476575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:41:39.634613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:41:39.654972 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:41:39.679688 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 53270 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:39.681988 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:39.692047 systemd-logind[1487]: New session 10 of user core.
Jan 13 21:41:39.697859 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:41:39.728699 kubelet[1729]: E0113 21:41:39.728485 1729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:41:39.732468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:41:39.732708 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:41:40.157358 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 21:41:40.158079 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:41:40.164039 sudo[1739]: pam_unix(sudo:session): session closed for user root
Jan 13 21:41:40.172033 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 21:41:40.172477 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:41:40.189912 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:41:40.239442 augenrules[1761]: No rules
Jan 13 21:41:40.240340 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:41:40.240626 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 21:41:40.242032 sudo[1738]: pam_unix(sudo:session): session closed for user root
Jan 13 21:41:40.385945 sshd[1735]: Connection closed by 139.178.68.195 port 53270
Jan 13 21:41:40.386988 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:40.391913 systemd[1]: sshd@7-10.230.41.226:22-139.178.68.195:53270.service: Deactivated successfully.
Jan 13 21:41:40.394280 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:41:40.395194 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:41:40.396787 systemd-logind[1487]: Removed session 10.
Jan 13 21:41:40.549813 systemd[1]: Started sshd@8-10.230.41.226:22-139.178.68.195:53274.service - OpenSSH per-connection server daemon (139.178.68.195:53274).
Jan 13 21:41:41.444618 sshd[1769]: Accepted publickey for core from 139.178.68.195 port 53274 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM
Jan 13 21:41:41.446685 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:41:41.452996 systemd-logind[1487]: New session 11 of user core.
Jan 13 21:41:41.464694 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:41:41.923916 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:41:41.924438 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:41:42.737786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:41:42.748594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:41:42.772533 systemd[1]: Reloading requested from client PID 1810 ('systemctl') (unit session-11.scope)...
Jan 13 21:41:42.772735 systemd[1]: Reloading...
Jan 13 21:41:42.905436 zram_generator::config[1849]: No configuration found.
Jan 13 21:41:43.106782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:41:43.220405 systemd[1]: Reloading finished in 446 ms.
Jan 13 21:41:43.296204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:41:43.299647 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 21:41:43.300011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:41:43.306681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:41:43.451777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:41:43.467220 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:41:43.522377 kubelet[1918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:41:43.522377 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:41:43.522377 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:41:43.548757 kubelet[1918]: I0113 21:41:43.548510 1918 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:41:44.010354 kubelet[1918]: I0113 21:41:44.009009 1918 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 21:41:44.010354 kubelet[1918]: I0113 21:41:44.009058 1918 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:41:44.010354 kubelet[1918]: I0113 21:41:44.009473 1918 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 21:41:44.028261 kubelet[1918]: I0113 21:41:44.028207 1918 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:41:44.068785 kubelet[1918]: I0113 21:41:44.068735 1918 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:41:44.070373 kubelet[1918]: I0113 21:41:44.070304 1918 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:41:44.070784 kubelet[1918]: I0113 21:41:44.070492 1918 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.41.226","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:41:44.071131 kubelet[1918]: I0113 21:41:44.071110 1918 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:41:44.071235 kubelet[1918]: I0113 21:41:44.071219 1918 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:41:44.071585 kubelet[1918]: I0113 21:41:44.071565 1918 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:41:44.073121 kubelet[1918]: I0113 21:41:44.072637 1918 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 21:41:44.073121 kubelet[1918]: I0113 21:41:44.072666 1918 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:41:44.073121 kubelet[1918]: I0113 21:41:44.072721 1918 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:41:44.073121 kubelet[1918]: I0113 21:41:44.072763 1918 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:41:44.075554 kubelet[1918]: E0113 21:41:44.075109 1918 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:41:44.075554 kubelet[1918]: E0113 21:41:44.075463 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:41:44.077738 kubelet[1918]: I0113 21:41:44.077514 1918 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 21:41:44.079071 kubelet[1918]: I0113 21:41:44.079048 1918 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:41:44.079196 kubelet[1918]: W0113 21:41:44.079152 1918 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:41:44.080316 kubelet[1918]: I0113 21:41:44.080289 1918 server.go:1264] "Started kubelet"
Jan 13 21:41:44.083741 kubelet[1918]: I0113 21:41:44.083671 1918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:41:44.093284 kubelet[1918]: E0113 21:41:44.093059 1918 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.41.226.181a5e7e95ffd0d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.41.226,UID:10.230.41.226,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.41.226,},FirstTimestamp:2025-01-13 21:41:44.080208082 +0000 UTC m=+0.607587948,LastTimestamp:2025-01-13 21:41:44.080208082 +0000 UTC m=+0.607587948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.41.226,}"
Jan 13 21:41:44.095789 kubelet[1918]: E0113 21:41:44.095751 1918 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:41:44.097349 kubelet[1918]: I0113 21:41:44.097275 1918 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:41:44.099099 kubelet[1918]: I0113 21:41:44.099075 1918 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:41:44.100858 kubelet[1918]: I0113 21:41:44.100839 1918 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:41:44.101105 kubelet[1918]: I0113 21:41:44.100675 1918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:41:44.102764 kubelet[1918]: I0113 21:41:44.102745 1918 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:41:44.103000 kubelet[1918]: I0113 21:41:44.102982 1918 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:41:44.103764 kubelet[1918]: I0113 21:41:44.103732 1918 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:41:44.107109 kubelet[1918]: I0113 21:41:44.107063 1918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:41:44.109601 kubelet[1918]: W0113 21:41:44.109569 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.41.226" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 21:41:44.109661 kubelet[1918]: E0113 21:41:44.109625 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.41.226" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 21:41:44.110693 kubelet[1918]: E0113 21:41:44.110559 1918 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.41.226.181a5e7e96ecb753 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.41.226,UID:10.230.41.226,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.230.41.226,},FirstTimestamp:2025-01-13 21:41:44.095733587 +0000 UTC m=+0.623113459,LastTimestamp:2025-01-13 21:41:44.095733587 +0000 UTC m=+0.623113459,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.41.226,}"
Jan 13 21:41:44.110839 kubelet[1918]: W0113 21:41:44.110752 1918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 21:41:44.110839 kubelet[1918]: E0113 21:41:44.110776 1918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 21:41:44.113254 kubelet[1918]: I0113 21:41:44.113200 1918 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:41:44.113693 kubelet[1918]: I0113 21:41:44.113676 1918 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:41:44.114421 kubelet[1918]: E0113 21:41:44.114384 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.41.226\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 21:41:44.147954 kubelet[1918]: I0113 21:41:44.147673 1918 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:41:44.147954 kubelet[1918]: I0113 21:41:44.147725 1918 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:41:44.147954 kubelet[1918]: I0113 21:41:44.147768 1918 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:41:44.153169 kubelet[1918]: I0113 21:41:44.152064 1918 policy_none.go:49] "None policy: Start"
Jan 13 21:41:44.156715 kubelet[1918]: I0113 21:41:44.156686 1918 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:41:44.156846 kubelet[1918]: I0113 21:41:44.156826 1918 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:41:44.169893 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:41:44.187412 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:41:44.193513 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:41:44.201265 kubelet[1918]: I0113 21:41:44.201236 1918 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:41:44.201783 kubelet[1918]: I0113 21:41:44.201725 1918 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:41:44.202575 kubelet[1918]: I0113 21:41:44.202474 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:41:44.203288 kubelet[1918]: I0113 21:41:44.203186 1918 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:41:44.208329 kubelet[1918]: I0113 21:41:44.207926 1918 kubelet_node_status.go:73] "Attempting to register node" node="10.230.41.226"
Jan 13 21:41:44.209445 kubelet[1918]: I0113 21:41:44.209421 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:41:44.210064 kubelet[1918]: I0113 21:41:44.210036 1918 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:41:44.210132 kubelet[1918]: I0113 21:41:44.210116 1918 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:41:44.210393 kubelet[1918]: E0113 21:41:44.210211 1918 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 13 21:41:44.215820 kubelet[1918]: E0113 21:41:44.215232 1918 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.41.226\" not found"
Jan 13 21:41:44.217089 kubelet[1918]: I0113 21:41:44.217049 1918 kubelet_node_status.go:76] "Successfully registered node" node="10.230.41.226"
Jan 13 21:41:44.232745 kubelet[1918]: E0113 21:41:44.232716 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:44.334579 kubelet[1918]: E0113 21:41:44.333882 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:44.434866 kubelet[1918]: E0113 21:41:44.434779 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:44.480307 sudo[1772]: pam_unix(sudo:session): session closed for user root
Jan 13 21:41:44.536045 kubelet[1918]: E0113 21:41:44.535925 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:44.624849 sshd[1771]: Connection closed by 139.178.68.195 port 53274
Jan 13 21:41:44.625835 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Jan 13 21:41:44.631133 systemd[1]: sshd@8-10.230.41.226:22-139.178.68.195:53274.service: Deactivated successfully.
Jan 13 21:41:44.633622 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:41:44.634825 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:41:44.636807 systemd-logind[1487]: Removed session 11.
Jan 13 21:41:44.637249 kubelet[1918]: E0113 21:41:44.637197 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:44.738383 kubelet[1918]: E0113 21:41:44.738029 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:44.839106 kubelet[1918]: E0113 21:41:44.838988 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:44.940318 kubelet[1918]: E0113 21:41:44.940025 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:45.013089 kubelet[1918]: I0113 21:41:45.012971 1918 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 21:41:45.013428 kubelet[1918]: W0113 21:41:45.013400 1918 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 21:41:45.041167 kubelet[1918]: E0113 21:41:45.041073 1918 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.41.226\" not found"
Jan 13 21:41:45.075812 kubelet[1918]: I0113 21:41:45.075714 1918 apiserver.go:52] "Watching apiserver"
Jan 13 21:41:45.076168 kubelet[1918]: E0113 21:41:45.076142 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:41:45.082487 kubelet[1918]: I0113 21:41:45.082217 1918 topology_manager.go:215] "Topology Admit Handler" podUID="67fcb48b-d316-4871-9271-ac00ca4cbaaa" podNamespace="calico-system" podName="calico-node-ffp5b"
Jan 13 21:41:45.082487 kubelet[1918]: I0113 21:41:45.082457 1918 topology_manager.go:215] "Topology Admit Handler" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" podNamespace="calico-system" podName="csi-node-driver-2bmg8"
Jan 13 21:41:45.083916 kubelet[1918]: I0113 21:41:45.082583 1918 topology_manager.go:215] "Topology Admit Handler" podUID="2193ba3d-2b90-4b38-b496-5198ebde0988" podNamespace="kube-system" podName="kube-proxy-8blm8"
Jan 13 21:41:45.083916 kubelet[1918]: E0113 21:41:45.083216 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd"
Jan 13 21:41:45.091678 systemd[1]: Created slice kubepods-besteffort-pod67fcb48b_d316_4871_9271_ac00ca4cbaaa.slice - libcontainer container kubepods-besteffort-pod67fcb48b_d316_4871_9271_ac00ca4cbaaa.slice.
Jan 13 21:41:45.103770 kubelet[1918]: I0113 21:41:45.103707 1918 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 21:41:45.104456 systemd[1]: Created slice kubepods-besteffort-pod2193ba3d_2b90_4b38_b496_5198ebde0988.slice - libcontainer container kubepods-besteffort-pod2193ba3d_2b90_4b38_b496_5198ebde0988.slice.
Jan 13 21:41:45.108198 kubelet[1918]: I0113 21:41:45.108163 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/447cb4dd-d91a-4916-9a29-3a8fd8543edd-socket-dir\") pod \"csi-node-driver-2bmg8\" (UID: \"447cb4dd-d91a-4916-9a29-3a8fd8543edd\") " pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:41:45.108434 kubelet[1918]: I0113 21:41:45.108411 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-lib-modules\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.108656 kubelet[1918]: I0113 21:41:45.108634 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-xtables-lock\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.108814 kubelet[1918]: I0113 21:41:45.108790 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fcb48b-d316-4871-9271-ac00ca4cbaaa-tigera-ca-bundle\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.109054 kubelet[1918]: I0113 21:41:45.109004 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-cni-bin-dir\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.109232 kubelet[1918]: I0113 21:41:45.109171 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-cni-net-dir\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.109493 kubelet[1918]: I0113 21:41:45.109360 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-flexvol-driver-host\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.109617 kubelet[1918]: I0113 21:41:45.109447 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/447cb4dd-d91a-4916-9a29-3a8fd8543edd-kubelet-dir\") pod \"csi-node-driver-2bmg8\" (UID: \"447cb4dd-d91a-4916-9a29-3a8fd8543edd\") " pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:41:45.109781 kubelet[1918]: I0113 21:41:45.109662 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2193ba3d-2b90-4b38-b496-5198ebde0988-lib-modules\") pod \"kube-proxy-8blm8\" (UID: \"2193ba3d-2b90-4b38-b496-5198ebde0988\") " pod="kube-system/kube-proxy-8blm8"
Jan 13 21:41:45.109949 kubelet[1918]: I0113 21:41:45.109696 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsr9\" (UniqueName: \"kubernetes.io/projected/67fcb48b-d316-4871-9271-ac00ca4cbaaa-kube-api-access-fcsr9\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.110136 kubelet[1918]: I0113 21:41:45.110069 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/447cb4dd-d91a-4916-9a29-3a8fd8543edd-registration-dir\") pod \"csi-node-driver-2bmg8\" (UID: \"447cb4dd-d91a-4916-9a29-3a8fd8543edd\") " pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:41:45.110281 kubelet[1918]: I0113 21:41:45.110251 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2193ba3d-2b90-4b38-b496-5198ebde0988-kube-proxy\") pod \"kube-proxy-8blm8\" (UID: \"2193ba3d-2b90-4b38-b496-5198ebde0988\") " pod="kube-system/kube-proxy-8blm8"
Jan 13 21:41:45.110782 kubelet[1918]: I0113 21:41:45.110390 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2193ba3d-2b90-4b38-b496-5198ebde0988-xtables-lock\") pod \"kube-proxy-8blm8\" (UID: \"2193ba3d-2b90-4b38-b496-5198ebde0988\") " pod="kube-system/kube-proxy-8blm8"
Jan 13 21:41:45.110782 kubelet[1918]: I0113 21:41:45.110424 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-policysync\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.110782 kubelet[1918]: I0113 21:41:45.110450 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67fcb48b-d316-4871-9271-ac00ca4cbaaa-node-certs\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.110782 kubelet[1918]: I0113 21:41:45.110479 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-var-run-calico\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.110782 kubelet[1918]: I0113 21:41:45.110516 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-var-lib-calico\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.111030 kubelet[1918]: I0113 21:41:45.110547 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67fcb48b-d316-4871-9271-ac00ca4cbaaa-cni-log-dir\") pod \"calico-node-ffp5b\" (UID: \"67fcb48b-d316-4871-9271-ac00ca4cbaaa\") " pod="calico-system/calico-node-ffp5b"
Jan 13 21:41:45.111030 kubelet[1918]: I0113 21:41:45.110574 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/447cb4dd-d91a-4916-9a29-3a8fd8543edd-varrun\") pod \"csi-node-driver-2bmg8\" (UID: \"447cb4dd-d91a-4916-9a29-3a8fd8543edd\") " pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:41:45.111030 kubelet[1918]: I0113 21:41:45.110602 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthjc\" (UniqueName: \"kubernetes.io/projected/447cb4dd-d91a-4916-9a29-3a8fd8543edd-kube-api-access-vthjc\") pod \"csi-node-driver-2bmg8\" (UID: \"447cb4dd-d91a-4916-9a29-3a8fd8543edd\") " pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:41:45.111030 kubelet[1918]: I0113 21:41:45.110656 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zh58\" (UniqueName: \"kubernetes.io/projected/2193ba3d-2b90-4b38-b496-5198ebde0988-kube-api-access-6zh58\") pod \"kube-proxy-8blm8\" (UID: \"2193ba3d-2b90-4b38-b496-5198ebde0988\") " pod="kube-system/kube-proxy-8blm8"
Jan 13 21:41:45.143478 kubelet[1918]: I0113 21:41:45.143429 1918 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 21:41:45.144266 kubelet[1918]: I0113 21:41:45.144157 1918 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 21:41:45.144373 containerd[1511]: time="2025-01-13T21:41:45.143886550Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:41:45.215433 kubelet[1918]: E0113 21:41:45.215291 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:41:45.215433 kubelet[1918]: W0113 21:41:45.215351 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:41:45.215433 kubelet[1918]: E0113 21:41:45.215402 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:41:45.216820 kubelet[1918]: E0113 21:41:45.215847 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:41:45.216820 kubelet[1918]: W0113 21:41:45.215867 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:41:45.216820 kubelet[1918]: E0113 21:41:45.215891 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:41:45.217628 kubelet[1918]: E0113 21:41:45.217195 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:41:45.217628 kubelet[1918]: W0113 21:41:45.217618 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:41:45.217760 kubelet[1918]: E0113 21:41:45.217635 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:41:45.218835 kubelet[1918]: E0113 21:41:45.218608 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:41:45.218835 kubelet[1918]: W0113 21:41:45.218628 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:41:45.218835 kubelet[1918]: E0113 21:41:45.218646 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 21:41:45.226115 kubelet[1918]: E0113 21:41:45.226009 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:41:45.226115 kubelet[1918]: W0113 21:41:45.226033 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:41:45.226115 kubelet[1918]: E0113 21:41:45.226055 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:41:45.236375 kubelet[1918]: E0113 21:41:45.233549 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:41:45.236375 kubelet[1918]: W0113 21:41:45.233586 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:41:45.236375 kubelet[1918]: E0113 21:41:45.233616 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:41:45.240522 kubelet[1918]: E0113 21:41:45.240464 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:41:45.240659 kubelet[1918]: W0113 21:41:45.240637 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:41:45.241966 kubelet[1918]: E0113 21:41:45.241937 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:41:45.247709 kubelet[1918]: E0113 21:41:45.247687 1918 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:41:45.247867 kubelet[1918]: W0113 21:41:45.247845 1918 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:41:45.248616 kubelet[1918]: E0113 21:41:45.248592 1918 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:41:45.401944 containerd[1511]: time="2025-01-13T21:41:45.401850889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ffp5b,Uid:67fcb48b-d316-4871-9271-ac00ca4cbaaa,Namespace:calico-system,Attempt:0,}" Jan 13 21:41:45.409682 containerd[1511]: time="2025-01-13T21:41:45.409620740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8blm8,Uid:2193ba3d-2b90-4b38-b496-5198ebde0988,Namespace:kube-system,Attempt:0,}" Jan 13 21:41:46.076715 kubelet[1918]: E0113 21:41:46.076598 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:46.123360 containerd[1511]: time="2025-01-13T21:41:46.121483385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:41:46.123360 containerd[1511]: time="2025-01-13T21:41:46.122779966Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:41:46.124242 containerd[1511]: time="2025-01-13T21:41:46.124198728Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:41:46.124462 containerd[1511]: time="2025-01-13T21:41:46.124411945Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 21:41:46.137290 containerd[1511]: time="2025-01-13T21:41:46.137210071Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:41:46.141897 containerd[1511]: time="2025-01-13T21:41:46.141785858Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:41:46.144604 containerd[1511]: time="2025-01-13T21:41:46.144559563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.823685ms" Jan 13 21:41:46.146341 containerd[1511]: time="2025-01-13T21:41:46.146286989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 744.179628ms" Jan 13 21:41:46.230593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403621396.mount: Deactivated successfully. Jan 13 21:41:46.273480 containerd[1511]: time="2025-01-13T21:41:46.273109335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:41:46.273819 containerd[1511]: time="2025-01-13T21:41:46.273492210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:41:46.273819 containerd[1511]: time="2025-01-13T21:41:46.273543511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:41:46.274047 containerd[1511]: time="2025-01-13T21:41:46.273812837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:41:46.274932 containerd[1511]: time="2025-01-13T21:41:46.274463128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:41:46.274932 containerd[1511]: time="2025-01-13T21:41:46.274548433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:41:46.274932 containerd[1511]: time="2025-01-13T21:41:46.274569301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:41:46.274932 containerd[1511]: time="2025-01-13T21:41:46.274693909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:41:46.389651 systemd[1]: Started cri-containerd-64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e.scope - libcontainer container 64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e. Jan 13 21:41:46.392630 systemd[1]: Started cri-containerd-f4de0a7afc9b40d77c11f04d152e8db51d2ff4debc6070e84f9492e7f8323f6b.scope - libcontainer container f4de0a7afc9b40d77c11f04d152e8db51d2ff4debc6070e84f9492e7f8323f6b. 
Jan 13 21:41:46.447759 containerd[1511]: time="2025-01-13T21:41:46.447515356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ffp5b,Uid:67fcb48b-d316-4871-9271-ac00ca4cbaaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e\"" Jan 13 21:41:46.450298 containerd[1511]: time="2025-01-13T21:41:46.449343134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8blm8,Uid:2193ba3d-2b90-4b38-b496-5198ebde0988,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4de0a7afc9b40d77c11f04d152e8db51d2ff4debc6070e84f9492e7f8323f6b\"" Jan 13 21:41:46.452790 containerd[1511]: time="2025-01-13T21:41:46.452736626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:41:47.077705 kubelet[1918]: E0113 21:41:47.077582 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:47.211566 kubelet[1918]: E0113 21:41:47.211428 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:47.728823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340671048.mount: Deactivated successfully. Jan 13 21:41:47.760858 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 21:41:47.874361 containerd[1511]: time="2025-01-13T21:41:47.874190028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:47.875563 containerd[1511]: time="2025-01-13T21:41:47.875363354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:41:47.876471 containerd[1511]: time="2025-01-13T21:41:47.876431347Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:47.881342 containerd[1511]: time="2025-01-13T21:41:47.879588148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:47.881342 containerd[1511]: time="2025-01-13T21:41:47.880685411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.42742794s" Jan 13 21:41:47.881342 containerd[1511]: time="2025-01-13T21:41:47.881211305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:41:47.884193 containerd[1511]: time="2025-01-13T21:41:47.884137495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:41:47.885984 containerd[1511]: time="2025-01-13T21:41:47.885933534Z" level=info msg="CreateContainer within sandbox 
\"64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:41:47.905043 containerd[1511]: time="2025-01-13T21:41:47.904978840Z" level=info msg="CreateContainer within sandbox \"64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9\"" Jan 13 21:41:47.906667 containerd[1511]: time="2025-01-13T21:41:47.906245242Z" level=info msg="StartContainer for \"6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9\"" Jan 13 21:41:47.951662 systemd[1]: Started cri-containerd-6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9.scope - libcontainer container 6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9. Jan 13 21:41:48.001404 containerd[1511]: time="2025-01-13T21:41:48.001195703Z" level=info msg="StartContainer for \"6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9\" returns successfully" Jan 13 21:41:48.016210 systemd[1]: cri-containerd-6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9.scope: Deactivated successfully. 
Jan 13 21:41:48.078125 kubelet[1918]: E0113 21:41:48.078033 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:48.117390 containerd[1511]: time="2025-01-13T21:41:48.114406919Z" level=info msg="shim disconnected" id=6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9 namespace=k8s.io Jan 13 21:41:48.117390 containerd[1511]: time="2025-01-13T21:41:48.114594813Z" level=warning msg="cleaning up after shim disconnected" id=6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9 namespace=k8s.io Jan 13 21:41:48.117390 containerd[1511]: time="2025-01-13T21:41:48.114616585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:41:48.670968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e6165471709282bbc94b7d423228c806142406ee528070d398bff98b99ecaa9-rootfs.mount: Deactivated successfully. Jan 13 21:41:49.079256 kubelet[1918]: E0113 21:41:49.079093 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:49.210979 kubelet[1918]: E0113 21:41:49.210891 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:49.368910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount909110144.mount: Deactivated successfully. 
Jan 13 21:41:50.016672 containerd[1511]: time="2025-01-13T21:41:50.016552006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:50.018074 containerd[1511]: time="2025-01-13T21:41:50.018014666Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 13 21:41:50.019198 containerd[1511]: time="2025-01-13T21:41:50.019134995Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:50.021878 containerd[1511]: time="2025-01-13T21:41:50.021839002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:50.023258 containerd[1511]: time="2025-01-13T21:41:50.023059920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.138868673s" Jan 13 21:41:50.023258 containerd[1511]: time="2025-01-13T21:41:50.023111336Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:41:50.026225 containerd[1511]: time="2025-01-13T21:41:50.025611075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:41:50.026871 containerd[1511]: time="2025-01-13T21:41:50.026839828Z" level=info msg="CreateContainer within sandbox \"f4de0a7afc9b40d77c11f04d152e8db51d2ff4debc6070e84f9492e7f8323f6b\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:41:50.045834 containerd[1511]: time="2025-01-13T21:41:50.045788159Z" level=info msg="CreateContainer within sandbox \"f4de0a7afc9b40d77c11f04d152e8db51d2ff4debc6070e84f9492e7f8323f6b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80ee1f2dbb2e325855cb4b877390b1bff0ca107a146bd0d9e7075ed4c42ba06b\"" Jan 13 21:41:50.047071 containerd[1511]: time="2025-01-13T21:41:50.046906922Z" level=info msg="StartContainer for \"80ee1f2dbb2e325855cb4b877390b1bff0ca107a146bd0d9e7075ed4c42ba06b\"" Jan 13 21:41:50.079902 kubelet[1918]: E0113 21:41:50.079800 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:50.094569 systemd[1]: Started cri-containerd-80ee1f2dbb2e325855cb4b877390b1bff0ca107a146bd0d9e7075ed4c42ba06b.scope - libcontainer container 80ee1f2dbb2e325855cb4b877390b1bff0ca107a146bd0d9e7075ed4c42ba06b. Jan 13 21:41:50.143985 containerd[1511]: time="2025-01-13T21:41:50.143533463Z" level=info msg="StartContainer for \"80ee1f2dbb2e325855cb4b877390b1bff0ca107a146bd0d9e7075ed4c42ba06b\" returns successfully" Jan 13 21:41:50.252271 kubelet[1918]: I0113 21:41:50.252154 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8blm8" podStartSLOduration=2.680741154 podStartE2EDuration="6.252118659s" podCreationTimestamp="2025-01-13 21:41:44 +0000 UTC" firstStartedPulling="2025-01-13 21:41:46.45308854 +0000 UTC m=+2.980468400" lastFinishedPulling="2025-01-13 21:41:50.024466038 +0000 UTC m=+6.551845905" observedRunningTime="2025-01-13 21:41:50.251726909 +0000 UTC m=+6.779106788" watchObservedRunningTime="2025-01-13 21:41:50.252118659 +0000 UTC m=+6.779498548" Jan 13 21:41:51.080430 kubelet[1918]: E0113 21:41:51.080354 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:51.212304 kubelet[1918]: E0113 
21:41:51.211734 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:52.080714 kubelet[1918]: E0113 21:41:52.080667 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:53.082160 kubelet[1918]: E0113 21:41:53.081822 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:53.211805 kubelet[1918]: E0113 21:41:53.210975 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:54.082622 kubelet[1918]: E0113 21:41:54.082524 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:55.082939 kubelet[1918]: E0113 21:41:55.082871 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:55.211554 kubelet[1918]: E0113 21:41:55.211444 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:55.271988 containerd[1511]: time="2025-01-13T21:41:55.271923514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:55.274381 containerd[1511]: time="2025-01-13T21:41:55.274310986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:41:55.275457 containerd[1511]: time="2025-01-13T21:41:55.275402235Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:55.284793 containerd[1511]: time="2025-01-13T21:41:55.284240038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:41:55.285574 containerd[1511]: time="2025-01-13T21:41:55.285537509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.259867324s" Jan 13 21:41:55.285657 containerd[1511]: time="2025-01-13T21:41:55.285579679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:41:55.288641 containerd[1511]: time="2025-01-13T21:41:55.288609614Z" level=info msg="CreateContainer within sandbox \"64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:41:55.302401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156245684.mount: Deactivated successfully. 
Jan 13 21:41:55.307506 containerd[1511]: time="2025-01-13T21:41:55.307358213Z" level=info msg="CreateContainer within sandbox \"64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631\"" Jan 13 21:41:55.308498 containerd[1511]: time="2025-01-13T21:41:55.308459930Z" level=info msg="StartContainer for \"3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631\"" Jan 13 21:41:55.361552 systemd[1]: Started cri-containerd-3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631.scope - libcontainer container 3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631. Jan 13 21:41:55.408410 containerd[1511]: time="2025-01-13T21:41:55.408252243Z" level=info msg="StartContainer for \"3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631\" returns successfully" Jan 13 21:41:56.084099 kubelet[1918]: E0113 21:41:56.083991 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:56.295239 containerd[1511]: time="2025-01-13T21:41:56.295170141Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:41:56.301558 kubelet[1918]: I0113 21:41:56.299376 1918 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:41:56.300755 systemd[1]: cri-containerd-3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631.scope: Deactivated successfully. Jan 13 21:41:56.331819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631-rootfs.mount: Deactivated successfully. 
Jan 13 21:41:56.757692 containerd[1511]: time="2025-01-13T21:41:56.757575130Z" level=info msg="shim disconnected" id=3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631 namespace=k8s.io Jan 13 21:41:56.757692 containerd[1511]: time="2025-01-13T21:41:56.757670493Z" level=warning msg="cleaning up after shim disconnected" id=3e999054f4a285ea30154665ddf98a49e3159ade40e580bd735d197b1eb28631 namespace=k8s.io Jan 13 21:41:56.757692 containerd[1511]: time="2025-01-13T21:41:56.757697987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:41:57.085307 kubelet[1918]: E0113 21:41:57.084511 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:57.219482 systemd[1]: Created slice kubepods-besteffort-pod447cb4dd_d91a_4916_9a29_3a8fd8543edd.slice - libcontainer container kubepods-besteffort-pod447cb4dd_d91a_4916_9a29_3a8fd8543edd.slice. Jan 13 21:41:57.224663 containerd[1511]: time="2025-01-13T21:41:57.224562224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:0,}" Jan 13 21:41:57.262018 containerd[1511]: time="2025-01-13T21:41:57.261313909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:41:57.322116 containerd[1511]: time="2025-01-13T21:41:57.322033413Z" level=error msg="Failed to destroy network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:57.324245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3-shm.mount: Deactivated successfully. 
Jan 13 21:41:57.324906 containerd[1511]: time="2025-01-13T21:41:57.324860523Z" level=error msg="encountered an error cleaning up failed sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:57.325115 containerd[1511]: time="2025-01-13T21:41:57.325024569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:57.326743 kubelet[1918]: E0113 21:41:57.325640 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:57.326743 kubelet[1918]: E0113 21:41:57.325918 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:41:57.326743 kubelet[1918]: E0113 21:41:57.326006 1918 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:41:57.326985 kubelet[1918]: E0113 21:41:57.326118 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:58.085208 kubelet[1918]: E0113 21:41:58.085098 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:58.263502 kubelet[1918]: I0113 21:41:58.263434 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3" Jan 13 21:41:58.264454 containerd[1511]: time="2025-01-13T21:41:58.264408785Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:41:58.264750 containerd[1511]: time="2025-01-13T21:41:58.264712221Z" level=info msg="Ensure that sandbox c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3 in task-service has 
been cleanup successfully" Jan 13 21:41:58.265360 containerd[1511]: time="2025-01-13T21:41:58.265104460Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:41:58.265360 containerd[1511]: time="2025-01-13T21:41:58.265185241Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:41:58.267396 systemd[1]: run-netns-cni\x2d5464a33a\x2d3ddf\x2d3cd4\x2d0775\x2df74931a3140a.mount: Deactivated successfully. Jan 13 21:41:58.268254 containerd[1511]: time="2025-01-13T21:41:58.267478953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:1,}" Jan 13 21:41:58.368213 containerd[1511]: time="2025-01-13T21:41:58.364718585Z" level=error msg="Failed to destroy network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:58.368213 containerd[1511]: time="2025-01-13T21:41:58.367726669Z" level=error msg="encountered an error cleaning up failed sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:58.368213 containerd[1511]: time="2025-01-13T21:41:58.367824257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:58.369290 kubelet[1918]: E0113 21:41:58.368363 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:58.369290 kubelet[1918]: E0113 21:41:58.368472 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:41:58.369290 kubelet[1918]: E0113 21:41:58.368504 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:41:58.369271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b-shm.mount: Deactivated successfully. 
Jan 13 21:41:58.369660 kubelet[1918]: E0113 21:41:58.368583 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:59.086086 kubelet[1918]: E0113 21:41:59.086025 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:41:59.270379 kubelet[1918]: I0113 21:41:59.270103 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b" Jan 13 21:41:59.271647 containerd[1511]: time="2025-01-13T21:41:59.271165142Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" Jan 13 21:41:59.271647 containerd[1511]: time="2025-01-13T21:41:59.271479591Z" level=info msg="Ensure that sandbox 4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b in task-service has been cleanup successfully" Jan 13 21:41:59.271784 containerd[1511]: time="2025-01-13T21:41:59.271723587Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully" Jan 13 21:41:59.271784 containerd[1511]: time="2025-01-13T21:41:59.271743965Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns 
successfully" Jan 13 21:41:59.274638 systemd[1]: run-netns-cni\x2d9a195c71\x2d3e5c\x2dfec7\x2de5b7\x2d74cf97d994cb.mount: Deactivated successfully. Jan 13 21:41:59.276765 containerd[1511]: time="2025-01-13T21:41:59.275223819Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:41:59.276765 containerd[1511]: time="2025-01-13T21:41:59.275384269Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:41:59.276765 containerd[1511]: time="2025-01-13T21:41:59.275423072Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:41:59.276931 containerd[1511]: time="2025-01-13T21:41:59.276771629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:2,}" Jan 13 21:41:59.377203 containerd[1511]: time="2025-01-13T21:41:59.376043358Z" level=error msg="Failed to destroy network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:59.377203 containerd[1511]: time="2025-01-13T21:41:59.376906433Z" level=error msg="encountered an error cleaning up failed sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:59.377203 containerd[1511]: time="2025-01-13T21:41:59.377000066Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:59.378862 kubelet[1918]: E0113 21:41:59.378805 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:41:59.379077 kubelet[1918]: E0113 21:41:59.379022 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:41:59.379254 kubelet[1918]: E0113 21:41:59.379167 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:41:59.379486 kubelet[1918]: E0113 21:41:59.379395 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:41:59.379796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986-shm.mount: Deactivated successfully. Jan 13 21:42:00.087213 kubelet[1918]: E0113 21:42:00.086960 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:00.278618 kubelet[1918]: I0113 21:42:00.277475 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986" Jan 13 21:42:00.279163 containerd[1511]: time="2025-01-13T21:42:00.278150039Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\"" Jan 13 21:42:00.279163 containerd[1511]: time="2025-01-13T21:42:00.278427099Z" level=info msg="Ensure that sandbox 66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986 in task-service has been cleanup successfully" Jan 13 21:42:00.283053 systemd[1]: run-netns-cni\x2dc2901764\x2dff23\x2d4784\x2dc818\x2d29bbaad998a1.mount: Deactivated successfully. 
Jan 13 21:42:00.283648 containerd[1511]: time="2025-01-13T21:42:00.283538525Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully" Jan 13 21:42:00.283648 containerd[1511]: time="2025-01-13T21:42:00.283566980Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully" Jan 13 21:42:00.284541 containerd[1511]: time="2025-01-13T21:42:00.284010022Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" Jan 13 21:42:00.284541 containerd[1511]: time="2025-01-13T21:42:00.284200255Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully" Jan 13 21:42:00.284541 containerd[1511]: time="2025-01-13T21:42:00.284250693Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully" Jan 13 21:42:00.285564 containerd[1511]: time="2025-01-13T21:42:00.284719085Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:42:00.285564 containerd[1511]: time="2025-01-13T21:42:00.284823511Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:42:00.285564 containerd[1511]: time="2025-01-13T21:42:00.284840491Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:42:00.287066 containerd[1511]: time="2025-01-13T21:42:00.286405978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:3,}" Jan 13 21:42:00.401562 containerd[1511]: time="2025-01-13T21:42:00.399839092Z" level=error msg="Failed to destroy network for sandbox 
\"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:00.401562 containerd[1511]: time="2025-01-13T21:42:00.400707001Z" level=error msg="encountered an error cleaning up failed sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:00.401562 containerd[1511]: time="2025-01-13T21:42:00.400824556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:00.404370 kubelet[1918]: E0113 21:42:00.402557 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:00.404370 kubelet[1918]: E0113 21:42:00.402654 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:00.404370 kubelet[1918]: E0113 21:42:00.402687 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:00.404639 kubelet[1918]: E0113 21:42:00.402786 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:42:00.405096 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193-shm.mount: Deactivated successfully. 
Jan 13 21:42:01.088642 kubelet[1918]: E0113 21:42:01.088464 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:01.285457 kubelet[1918]: I0113 21:42:01.285390 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193" Jan 13 21:42:01.290241 containerd[1511]: time="2025-01-13T21:42:01.286454559Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\"" Jan 13 21:42:01.290241 containerd[1511]: time="2025-01-13T21:42:01.287627082Z" level=info msg="Ensure that sandbox 29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193 in task-service has been cleanup successfully" Jan 13 21:42:01.290316 systemd[1]: run-netns-cni\x2d83f22aab\x2dddf8\x2d4a3e\x2ddae8\x2d4e4507393aa7.mount: Deactivated successfully. Jan 13 21:42:01.290909 containerd[1511]: time="2025-01-13T21:42:01.290852801Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully" Jan 13 21:42:01.291030 containerd[1511]: time="2025-01-13T21:42:01.291003703Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully" Jan 13 21:42:01.293831 containerd[1511]: time="2025-01-13T21:42:01.293799380Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\"" Jan 13 21:42:01.294441 containerd[1511]: time="2025-01-13T21:42:01.294411642Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully" Jan 13 21:42:01.294563 containerd[1511]: time="2025-01-13T21:42:01.294530726Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully" Jan 13 21:42:01.295058 containerd[1511]: 
time="2025-01-13T21:42:01.295029024Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" Jan 13 21:42:01.295265 containerd[1511]: time="2025-01-13T21:42:01.295238673Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully" Jan 13 21:42:01.295971 containerd[1511]: time="2025-01-13T21:42:01.295367248Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully" Jan 13 21:42:01.297266 containerd[1511]: time="2025-01-13T21:42:01.297235481Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:42:01.297514 containerd[1511]: time="2025-01-13T21:42:01.297479312Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:42:01.297629 containerd[1511]: time="2025-01-13T21:42:01.297604199Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:42:01.298734 containerd[1511]: time="2025-01-13T21:42:01.298695178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:4,}" Jan 13 21:42:01.326591 update_engine[1489]: I20250113 21:42:01.326458 1489 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:42:01.400525 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2515) Jan 13 21:42:01.523571 containerd[1511]: time="2025-01-13T21:42:01.523461909Z" level=error msg="Failed to destroy network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:01.529688 containerd[1511]: time="2025-01-13T21:42:01.528122257Z" level=error msg="encountered an error cleaning up failed sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:01.529688 containerd[1511]: time="2025-01-13T21:42:01.528281965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:01.529847 kubelet[1918]: E0113 21:42:01.529079 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:01.529847 kubelet[1918]: 
E0113 21:42:01.529254 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:01.528663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b-shm.mount: Deactivated successfully. Jan 13 21:42:01.530629 kubelet[1918]: E0113 21:42:01.530491 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:01.530746 kubelet[1918]: E0113 21:42:01.530666 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:42:01.554364 kernel: BTRFS warning: duplicate device /dev/vda3 
devid 1 generation 44 scanned by (udev-worker) (2514) Jan 13 21:42:02.089660 kubelet[1918]: E0113 21:42:02.089612 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:02.307379 kubelet[1918]: I0113 21:42:02.307309 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b" Jan 13 21:42:02.308579 containerd[1511]: time="2025-01-13T21:42:02.308540548Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\"" Jan 13 21:42:02.309574 containerd[1511]: time="2025-01-13T21:42:02.309541221Z" level=info msg="Ensure that sandbox 70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b in task-service has been cleanup successfully" Jan 13 21:42:02.310424 containerd[1511]: time="2025-01-13T21:42:02.310199065Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully" Jan 13 21:42:02.310424 containerd[1511]: time="2025-01-13T21:42:02.310225077Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully" Jan 13 21:42:02.313375 containerd[1511]: time="2025-01-13T21:42:02.312218882Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\"" Jan 13 21:42:02.313375 containerd[1511]: time="2025-01-13T21:42:02.312349976Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully" Jan 13 21:42:02.313375 containerd[1511]: time="2025-01-13T21:42:02.312371009Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully" Jan 13 21:42:02.313375 containerd[1511]: time="2025-01-13T21:42:02.312715443Z" level=info msg="StopPodSandbox for 
\"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\"" Jan 13 21:42:02.313375 containerd[1511]: time="2025-01-13T21:42:02.312816791Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully" Jan 13 21:42:02.313375 containerd[1511]: time="2025-01-13T21:42:02.312833370Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully" Jan 13 21:42:02.314685 systemd[1]: run-netns-cni\x2d1372c6c4\x2d3f16\x2dfc3d\x2dad05\x2d4627e6d20054.mount: Deactivated successfully. Jan 13 21:42:02.316765 containerd[1511]: time="2025-01-13T21:42:02.314649604Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" Jan 13 21:42:02.316765 containerd[1511]: time="2025-01-13T21:42:02.314992272Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully" Jan 13 21:42:02.316765 containerd[1511]: time="2025-01-13T21:42:02.315031241Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully" Jan 13 21:42:02.316765 containerd[1511]: time="2025-01-13T21:42:02.315963140Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:42:02.316765 containerd[1511]: time="2025-01-13T21:42:02.316101645Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:42:02.316765 containerd[1511]: time="2025-01-13T21:42:02.316125946Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:42:02.318184 containerd[1511]: time="2025-01-13T21:42:02.317466323Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:5,}" Jan 13 21:42:02.458030 containerd[1511]: time="2025-01-13T21:42:02.457874604Z" level=error msg="Failed to destroy network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:02.462480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced-shm.mount: Deactivated successfully. Jan 13 21:42:02.463856 containerd[1511]: time="2025-01-13T21:42:02.463653837Z" level=error msg="encountered an error cleaning up failed sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:02.463856 containerd[1511]: time="2025-01-13T21:42:02.463728559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:02.464926 kubelet[1918]: E0113 21:42:02.464706 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:02.464926 kubelet[1918]: E0113 21:42:02.464910 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:02.465110 kubelet[1918]: E0113 21:42:02.464946 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:02.465698 kubelet[1918]: E0113 21:42:02.465403 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:42:02.573337 kubelet[1918]: I0113 21:42:02.573269 1918 
topology_manager.go:215] "Topology Admit Handler" podUID="4a557c18-91ac-453d-9b7a-3eb973d9a2bc" podNamespace="default" podName="nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:02.584725 systemd[1]: Created slice kubepods-besteffort-pod4a557c18_91ac_453d_9b7a_3eb973d9a2bc.slice - libcontainer container kubepods-besteffort-pod4a557c18_91ac_453d_9b7a_3eb973d9a2bc.slice. Jan 13 21:42:02.737112 kubelet[1918]: I0113 21:42:02.736989 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pgg\" (UniqueName: \"kubernetes.io/projected/4a557c18-91ac-453d-9b7a-3eb973d9a2bc-kube-api-access-q8pgg\") pod \"nginx-deployment-85f456d6dd-56g8g\" (UID: \"4a557c18-91ac-453d-9b7a-3eb973d9a2bc\") " pod="default/nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:02.897019 containerd[1511]: time="2025-01-13T21:42:02.896534134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:0,}" Jan 13 21:42:03.006685 containerd[1511]: time="2025-01-13T21:42:03.006492655Z" level=error msg="Failed to destroy network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.007375 containerd[1511]: time="2025-01-13T21:42:03.006977675Z" level=error msg="encountered an error cleaning up failed sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.007375 containerd[1511]: time="2025-01-13T21:42:03.007082358Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.007561 kubelet[1918]: E0113 21:42:03.007400 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.007561 kubelet[1918]: E0113 21:42:03.007480 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:03.007561 kubelet[1918]: E0113 21:42:03.007513 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:03.007738 kubelet[1918]: E0113 21:42:03.007597 1918 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-56g8g" podUID="4a557c18-91ac-453d-9b7a-3eb973d9a2bc" Jan 13 21:42:03.090497 kubelet[1918]: E0113 21:42:03.090302 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:03.312453 kubelet[1918]: I0113 21:42:03.312175 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced" Jan 13 21:42:03.316404 containerd[1511]: time="2025-01-13T21:42:03.313324788Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\"" Jan 13 21:42:03.316404 containerd[1511]: time="2025-01-13T21:42:03.313627536Z" level=info msg="Ensure that sandbox 9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced in task-service has been cleanup successfully" Jan 13 21:42:03.316404 containerd[1511]: time="2025-01-13T21:42:03.313899550Z" level=info msg="TearDown network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" successfully" Jan 13 21:42:03.316404 containerd[1511]: time="2025-01-13T21:42:03.313920675Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" returns successfully" Jan 13 21:42:03.317710 containerd[1511]: time="2025-01-13T21:42:03.317671022Z" 
level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\"" Jan 13 21:42:03.317830 containerd[1511]: time="2025-01-13T21:42:03.317793229Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully" Jan 13 21:42:03.317830 containerd[1511]: time="2025-01-13T21:42:03.317818419Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully" Jan 13 21:42:03.320792 systemd[1]: run-netns-cni\x2da8587942\x2d51f4\x2d7333\x2d3e20\x2d458aa1496bb4.mount: Deactivated successfully. Jan 13 21:42:03.323436 containerd[1511]: time="2025-01-13T21:42:03.323038349Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\"" Jan 13 21:42:03.323436 containerd[1511]: time="2025-01-13T21:42:03.323167621Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully" Jan 13 21:42:03.323436 containerd[1511]: time="2025-01-13T21:42:03.323187631Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully" Jan 13 21:42:03.323784 kubelet[1918]: I0113 21:42:03.323631 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a" Jan 13 21:42:03.324372 containerd[1511]: time="2025-01-13T21:42:03.324094286Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\"" Jan 13 21:42:03.324372 containerd[1511]: time="2025-01-13T21:42:03.324240118Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully" Jan 13 21:42:03.324372 containerd[1511]: time="2025-01-13T21:42:03.324274001Z" level=info msg="StopPodSandbox for 
\"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully" Jan 13 21:42:03.324934 containerd[1511]: time="2025-01-13T21:42:03.324903719Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\"" Jan 13 21:42:03.325665 containerd[1511]: time="2025-01-13T21:42:03.325466841Z" level=info msg="Ensure that sandbox d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a in task-service has been cleanup successfully" Jan 13 21:42:03.327630 containerd[1511]: time="2025-01-13T21:42:03.327600311Z" level=info msg="TearDown network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" successfully" Jan 13 21:42:03.327770 containerd[1511]: time="2025-01-13T21:42:03.327743239Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" returns successfully" Jan 13 21:42:03.329020 containerd[1511]: time="2025-01-13T21:42:03.328002562Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" Jan 13 21:42:03.329020 containerd[1511]: time="2025-01-13T21:42:03.328107368Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully" Jan 13 21:42:03.329020 containerd[1511]: time="2025-01-13T21:42:03.328126243Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully" Jan 13 21:42:03.328797 systemd[1]: run-netns-cni\x2d14afd5ce\x2dff60\x2d7301\x2d0384\x2d4d72287c0a33.mount: Deactivated successfully. 
Jan 13 21:42:03.330950 containerd[1511]: time="2025-01-13T21:42:03.329896680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:1,}" Jan 13 21:42:03.330950 containerd[1511]: time="2025-01-13T21:42:03.330014468Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:42:03.330950 containerd[1511]: time="2025-01-13T21:42:03.330137348Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:42:03.330950 containerd[1511]: time="2025-01-13T21:42:03.330155746Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:42:03.331212 containerd[1511]: time="2025-01-13T21:42:03.331171104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:6,}" Jan 13 21:42:03.509739 containerd[1511]: time="2025-01-13T21:42:03.509628078Z" level=error msg="Failed to destroy network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.514416 containerd[1511]: time="2025-01-13T21:42:03.513273211Z" level=error msg="encountered an error cleaning up failed sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.514416 containerd[1511]: time="2025-01-13T21:42:03.513550759Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.514614 kubelet[1918]: E0113 21:42:03.513843 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.514614 kubelet[1918]: E0113 21:42:03.513930 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:03.514614 kubelet[1918]: E0113 21:42:03.513964 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:03.514897 kubelet[1918]: E0113 21:42:03.514020 1918 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-56g8g" podUID="4a557c18-91ac-453d-9b7a-3eb973d9a2bc" Jan 13 21:42:03.519372 containerd[1511]: time="2025-01-13T21:42:03.519309975Z" level=error msg="Failed to destroy network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.519923 containerd[1511]: time="2025-01-13T21:42:03.519887089Z" level=error msg="encountered an error cleaning up failed sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.520120 containerd[1511]: time="2025-01-13T21:42:03.520083685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.521016 kubelet[1918]: E0113 21:42:03.520951 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:03.521094 kubelet[1918]: E0113 21:42:03.521030 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:03.521094 kubelet[1918]: E0113 21:42:03.521083 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:03.521208 kubelet[1918]: E0113 21:42:03.521131 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:42:04.074316 kubelet[1918]: E0113 21:42:04.073732 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:04.091178 kubelet[1918]: E0113 21:42:04.091092 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:04.316733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00-shm.mount: Deactivated successfully. Jan 13 21:42:04.317198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f-shm.mount: Deactivated successfully. 
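Earlier in the log, systemd created `kubepods-besteffort-pod4a557c18_91ac_453d_9b7a_3eb973d9a2bc.slice` for the nginx pod: with the systemd cgroup driver, kubelet derives the slice name from the QoS class plus the pod UID, with the UID's dashes replaced by underscores (a literal `-` would read as a slice-hierarchy separator to systemd). A sketch of the mapping, covering only the besteffort QoS class seen here:

```python
def besteffort_pod_slice(pod_uid: str) -> str:
    # kubelet's systemd cgroup driver builds the slice name from the
    # QoS class and the pod UID; '-' in the UID becomes '_' so systemd
    # does not interpret it as a hierarchy separator.
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"
```

Applied to the nginx pod UID from the log, `besteffort_pod_slice("4a557c18-91ac-453d-9b7a-3eb973d9a2bc")` reproduces the slice name systemd reported.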
Jan 13 21:42:04.330855 kubelet[1918]: I0113 21:42:04.330713 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f" Jan 13 21:42:04.332029 containerd[1511]: time="2025-01-13T21:42:04.331979264Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\"" Jan 13 21:42:04.332390 containerd[1511]: time="2025-01-13T21:42:04.332268016Z" level=info msg="Ensure that sandbox 9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f in task-service has been cleanup successfully" Jan 13 21:42:04.333046 containerd[1511]: time="2025-01-13T21:42:04.332543343Z" level=info msg="TearDown network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" successfully" Jan 13 21:42:04.333046 containerd[1511]: time="2025-01-13T21:42:04.332563730Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" returns successfully" Jan 13 21:42:04.335262 systemd[1]: run-netns-cni\x2dc4accfe3\x2da5e7\x2d9899\x2d3499\x2dc92ded3c2cea.mount: Deactivated successfully. 
Jan 13 21:42:04.335888 containerd[1511]: time="2025-01-13T21:42:04.335753230Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\"" Jan 13 21:42:04.335945 containerd[1511]: time="2025-01-13T21:42:04.335891505Z" level=info msg="TearDown network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" successfully" Jan 13 21:42:04.335945 containerd[1511]: time="2025-01-13T21:42:04.335911248Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" returns successfully" Jan 13 21:42:04.338777 containerd[1511]: time="2025-01-13T21:42:04.337733115Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\"" Jan 13 21:42:04.338777 containerd[1511]: time="2025-01-13T21:42:04.337866734Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully" Jan 13 21:42:04.338777 containerd[1511]: time="2025-01-13T21:42:04.337894564Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully" Jan 13 21:42:04.339696 containerd[1511]: time="2025-01-13T21:42:04.339666570Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\"" Jan 13 21:42:04.340118 containerd[1511]: time="2025-01-13T21:42:04.340091748Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully" Jan 13 21:42:04.340320 containerd[1511]: time="2025-01-13T21:42:04.340189221Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully" Jan 13 21:42:04.341377 kubelet[1918]: I0113 21:42:04.341350 1918 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00" Jan 13 21:42:04.344168 containerd[1511]: time="2025-01-13T21:42:04.344136026Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\"" Jan 13 21:42:04.347007 containerd[1511]: time="2025-01-13T21:42:04.344472917Z" level=info msg="Ensure that sandbox c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00 in task-service has been cleanup successfully" Jan 13 21:42:04.347007 containerd[1511]: time="2025-01-13T21:42:04.346631170Z" level=info msg="TearDown network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" successfully" Jan 13 21:42:04.347007 containerd[1511]: time="2025-01-13T21:42:04.346653632Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" returns successfully" Jan 13 21:42:04.347007 containerd[1511]: time="2025-01-13T21:42:04.346753875Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\"" Jan 13 21:42:04.347007 containerd[1511]: time="2025-01-13T21:42:04.346868074Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully" Jan 13 21:42:04.347007 containerd[1511]: time="2025-01-13T21:42:04.346887541Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully" Jan 13 21:42:04.348285 systemd[1]: run-netns-cni\x2d5394d341\x2da282\x2d2ea9\x2daab6\x2d2aaa2abef78a.mount: Deactivated successfully. 
Jan 13 21:42:04.350756 containerd[1511]: time="2025-01-13T21:42:04.349929597Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\"" Jan 13 21:42:04.350756 containerd[1511]: time="2025-01-13T21:42:04.350052424Z" level=info msg="TearDown network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" successfully" Jan 13 21:42:04.350756 containerd[1511]: time="2025-01-13T21:42:04.350074298Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" returns successfully" Jan 13 21:42:04.350756 containerd[1511]: time="2025-01-13T21:42:04.350158263Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" Jan 13 21:42:04.350756 containerd[1511]: time="2025-01-13T21:42:04.350277018Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully" Jan 13 21:42:04.350756 containerd[1511]: time="2025-01-13T21:42:04.350293414Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully" Jan 13 21:42:04.352883 containerd[1511]: time="2025-01-13T21:42:04.352084624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:2,}" Jan 13 21:42:04.359115 containerd[1511]: time="2025-01-13T21:42:04.359080299Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:42:04.359420 containerd[1511]: time="2025-01-13T21:42:04.359394016Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:42:04.359541 containerd[1511]: time="2025-01-13T21:42:04.359517478Z" level=info msg="StopPodSandbox for 
\"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:42:04.385591 containerd[1511]: time="2025-01-13T21:42:04.385543726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:7,}" Jan 13 21:42:04.613636 containerd[1511]: time="2025-01-13T21:42:04.613425904Z" level=error msg="Failed to destroy network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.615121 containerd[1511]: time="2025-01-13T21:42:04.615086583Z" level=error msg="encountered an error cleaning up failed sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.616289 containerd[1511]: time="2025-01-13T21:42:04.616250471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.617046 kubelet[1918]: E0113 21:42:04.616959 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.617217 kubelet[1918]: E0113 21:42:04.617100 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:04.617217 kubelet[1918]: E0113 21:42:04.617136 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8" Jan 13 21:42:04.617323 kubelet[1918]: E0113 21:42:04.617212 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd" Jan 13 21:42:04.618525 containerd[1511]: 
time="2025-01-13T21:42:04.618483286Z" level=error msg="Failed to destroy network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.619515 containerd[1511]: time="2025-01-13T21:42:04.619476891Z" level=error msg="encountered an error cleaning up failed sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.620279 containerd[1511]: time="2025-01-13T21:42:04.620244886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.620830 kubelet[1918]: E0113 21:42:04.620639 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:42:04.620830 kubelet[1918]: E0113 21:42:04.620687 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:04.620830 kubelet[1918]: E0113 21:42:04.620713 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g" Jan 13 21:42:04.621064 kubelet[1918]: E0113 21:42:04.620756 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-56g8g" podUID="4a557c18-91ac-453d-9b7a-3eb973d9a2bc" Jan 13 21:42:05.092201 kubelet[1918]: E0113 21:42:05.092087 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:05.316852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a-shm.mount: Deactivated successfully. 
Jan 13 21:42:05.317309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468-shm.mount: Deactivated successfully.
Jan 13 21:42:05.347272 kubelet[1918]: I0113 21:42:05.346917 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a"
Jan 13 21:42:05.348854 containerd[1511]: time="2025-01-13T21:42:05.348798847Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\""
Jan 13 21:42:05.354362 containerd[1511]: time="2025-01-13T21:42:05.351489918Z" level=info msg="Ensure that sandbox 68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a in task-service has been cleanup successfully"
Jan 13 21:42:05.354526 containerd[1511]: time="2025-01-13T21:42:05.354498109Z" level=info msg="TearDown network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" successfully"
Jan 13 21:42:05.354674 containerd[1511]: time="2025-01-13T21:42:05.354650263Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" returns successfully"
Jan 13 21:42:05.355269 systemd[1]: run-netns-cni\x2d17f79156\x2d7e69\x2d5549\x2d6b2b\x2d8c46fc9e0b9f.mount: Deactivated successfully.
Jan 13 21:42:05.356646 containerd[1511]: time="2025-01-13T21:42:05.356611150Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\""
Jan 13 21:42:05.357062 containerd[1511]: time="2025-01-13T21:42:05.356723948Z" level=info msg="TearDown network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" successfully"
Jan 13 21:42:05.357062 containerd[1511]: time="2025-01-13T21:42:05.356750715Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" returns successfully"
Jan 13 21:42:05.358510 containerd[1511]: time="2025-01-13T21:42:05.358481225Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\""
Jan 13 21:42:05.358614 containerd[1511]: time="2025-01-13T21:42:05.358591122Z" level=info msg="TearDown network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" successfully"
Jan 13 21:42:05.358677 containerd[1511]: time="2025-01-13T21:42:05.358615349Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" returns successfully"
Jan 13 21:42:05.359732 containerd[1511]: time="2025-01-13T21:42:05.359698835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:3,}"
Jan 13 21:42:05.375910 kubelet[1918]: I0113 21:42:05.373638 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468"
Jan 13 21:42:05.383392 containerd[1511]: time="2025-01-13T21:42:05.383354142Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\""
Jan 13 21:42:05.383785 containerd[1511]: time="2025-01-13T21:42:05.383754924Z" level=info msg="Ensure that sandbox ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468 in task-service has been cleanup successfully"
Jan 13 21:42:05.384099 containerd[1511]: time="2025-01-13T21:42:05.384074502Z" level=info msg="TearDown network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" successfully"
Jan 13 21:42:05.385390 containerd[1511]: time="2025-01-13T21:42:05.385359777Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" returns successfully"
Jan 13 21:42:05.386039 systemd[1]: run-netns-cni\x2d2cbe56cc\x2da887\x2dc2a4\x2d8670\x2d8633ba6242ce.mount: Deactivated successfully.
Jan 13 21:42:05.387987 containerd[1511]: time="2025-01-13T21:42:05.387210040Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\""
Jan 13 21:42:05.387987 containerd[1511]: time="2025-01-13T21:42:05.387338168Z" level=info msg="TearDown network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" successfully"
Jan 13 21:42:05.387987 containerd[1511]: time="2025-01-13T21:42:05.387358908Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" returns successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.391297670Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\""
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.391417921Z" level=info msg="TearDown network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.391437402Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" returns successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.391758240Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\""
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.391860116Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.391877510Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.392416560Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\""
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.392509780Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.392530231Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.392858958Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\""
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.393010169Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.393045566Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.393785120Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\""
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.393880018Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully"
Jan 13 21:42:05.394182 containerd[1511]: time="2025-01-13T21:42:05.393896591Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully"
Jan 13 21:42:05.395294 containerd[1511]: time="2025-01-13T21:42:05.395015345Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\""
Jan 13 21:42:05.395294 containerd[1511]: time="2025-01-13T21:42:05.395158930Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully"
Jan 13 21:42:05.395294 containerd[1511]: time="2025-01-13T21:42:05.395189480Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully"
Jan 13 21:42:05.396675 containerd[1511]: time="2025-01-13T21:42:05.396642809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:8,}"
Jan 13 21:42:05.648808 containerd[1511]: time="2025-01-13T21:42:05.648599999Z" level=error msg="Failed to destroy network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.649919 containerd[1511]: time="2025-01-13T21:42:05.649636794Z" level=error msg="encountered an error cleaning up failed sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.649919 containerd[1511]: time="2025-01-13T21:42:05.649738864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.651777 kubelet[1918]: E0113 21:42:05.651172 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.651777 kubelet[1918]: E0113 21:42:05.651297 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:42:05.651777 kubelet[1918]: E0113 21:42:05.651360 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:42:05.652001 kubelet[1918]: E0113 21:42:05.651440 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd"
Jan 13 21:42:05.657587 containerd[1511]: time="2025-01-13T21:42:05.657550593Z" level=error msg="Failed to destroy network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.658921 containerd[1511]: time="2025-01-13T21:42:05.658876531Z" level=error msg="encountered an error cleaning up failed sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.659420 containerd[1511]: time="2025-01-13T21:42:05.659372311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.660178 kubelet[1918]: E0113 21:42:05.659565 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:05.660178 kubelet[1918]: E0113 21:42:05.659619 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g"
Jan 13 21:42:05.660178 kubelet[1918]: E0113 21:42:05.659652 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g"
Jan 13 21:42:05.660394 kubelet[1918]: E0113 21:42:05.659711 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-56g8g" podUID="4a557c18-91ac-453d-9b7a-3eb973d9a2bc"
Jan 13 21:42:06.094546 kubelet[1918]: E0113 21:42:06.094452 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:42:06.317376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23-shm.mount: Deactivated successfully.
Jan 13 21:42:06.384611 kubelet[1918]: I0113 21:42:06.383038 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0"
Jan 13 21:42:06.385679 containerd[1511]: time="2025-01-13T21:42:06.385125833Z" level=info msg="StopPodSandbox for \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\""
Jan 13 21:42:06.388385 containerd[1511]: time="2025-01-13T21:42:06.386198342Z" level=info msg="Ensure that sandbox 6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0 in task-service has been cleanup successfully"
Jan 13 21:42:06.388385 containerd[1511]: time="2025-01-13T21:42:06.386433119Z" level=info msg="TearDown network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" successfully"
Jan 13 21:42:06.388385 containerd[1511]: time="2025-01-13T21:42:06.386455919Z" level=info msg="StopPodSandbox for \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" returns successfully"
Jan 13 21:42:06.389343 containerd[1511]: time="2025-01-13T21:42:06.388669373Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\""
Jan 13 21:42:06.389343 containerd[1511]: time="2025-01-13T21:42:06.388769106Z" level=info msg="TearDown network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" successfully"
Jan 13 21:42:06.389343 containerd[1511]: time="2025-01-13T21:42:06.388786398Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" returns successfully"
Jan 13 21:42:06.389343 containerd[1511]: time="2025-01-13T21:42:06.389227687Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\""
Jan 13 21:42:06.389343 containerd[1511]: time="2025-01-13T21:42:06.389319164Z" level=info msg="TearDown network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" successfully"
Jan 13 21:42:06.389604 containerd[1511]: time="2025-01-13T21:42:06.389352930Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" returns successfully"
Jan 13 21:42:06.390106 systemd[1]: run-netns-cni\x2d49e3d4f9\x2d8ef4\x2d95cf\x2d49d9\x2dfed51aa80c69.mount: Deactivated successfully.
Jan 13 21:42:06.390471 containerd[1511]: time="2025-01-13T21:42:06.390391924Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\""
Jan 13 21:42:06.390534 containerd[1511]: time="2025-01-13T21:42:06.390504143Z" level=info msg="TearDown network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" successfully"
Jan 13 21:42:06.390534 containerd[1511]: time="2025-01-13T21:42:06.390520959Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" returns successfully"
Jan 13 21:42:06.391734 containerd[1511]: time="2025-01-13T21:42:06.390937031Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\""
Jan 13 21:42:06.391734 containerd[1511]: time="2025-01-13T21:42:06.391035875Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully"
Jan 13 21:42:06.391734 containerd[1511]: time="2025-01-13T21:42:06.391052758Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully"
Jan 13 21:42:06.391734 containerd[1511]: time="2025-01-13T21:42:06.391528905Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\""
Jan 13 21:42:06.391734 containerd[1511]: time="2025-01-13T21:42:06.391623805Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully"
Jan 13 21:42:06.391734 containerd[1511]: time="2025-01-13T21:42:06.391639959Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully"
Jan 13 21:42:06.392024 kubelet[1918]: I0113 21:42:06.391906 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23"
Jan 13 21:42:06.393694 containerd[1511]: time="2025-01-13T21:42:06.392872611Z" level=info msg="StopPodSandbox for \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\""
Jan 13 21:42:06.393694 containerd[1511]: time="2025-01-13T21:42:06.393095053Z" level=info msg="Ensure that sandbox 1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23 in task-service has been cleanup successfully"
Jan 13 21:42:06.393694 containerd[1511]: time="2025-01-13T21:42:06.393301373Z" level=info msg="TearDown network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" successfully"
Jan 13 21:42:06.393694 containerd[1511]: time="2025-01-13T21:42:06.393320340Z" level=info msg="StopPodSandbox for \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" returns successfully"
Jan 13 21:42:06.393694 containerd[1511]: time="2025-01-13T21:42:06.393402314Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\""
Jan 13 21:42:06.393694 containerd[1511]: time="2025-01-13T21:42:06.393500078Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully"
Jan 13 21:42:06.393694 containerd[1511]: time="2025-01-13T21:42:06.393515996Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully"
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.395843656Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\""
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.395971243Z" level=info msg="TearDown network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" successfully"
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.395989824Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" returns successfully"
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.396061881Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\""
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.396153090Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully"
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.396169028Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully"
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.396668721Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\""
Jan 13 21:42:06.396740 containerd[1511]: time="2025-01-13T21:42:06.396760023Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully"
Jan 13 21:42:06.396315 systemd[1]: run-netns-cni\x2d16395ff5\x2d3464\x2d5780\x2d462c\x2db170ff9bce1f.mount: Deactivated successfully.
Jan 13 21:42:06.397403 containerd[1511]: time="2025-01-13T21:42:06.396776210Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully"
Jan 13 21:42:06.397403 containerd[1511]: time="2025-01-13T21:42:06.396836304Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\""
Jan 13 21:42:06.397403 containerd[1511]: time="2025-01-13T21:42:06.396917685Z" level=info msg="TearDown network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" successfully"
Jan 13 21:42:06.397403 containerd[1511]: time="2025-01-13T21:42:06.396933586Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" returns successfully"
Jan 13 21:42:06.398928 containerd[1511]: time="2025-01-13T21:42:06.398855322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:9,}"
Jan 13 21:42:06.412548 containerd[1511]: time="2025-01-13T21:42:06.412514547Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\""
Jan 13 21:42:06.412792 containerd[1511]: time="2025-01-13T21:42:06.412742255Z" level=info msg="TearDown network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" successfully"
Jan 13 21:42:06.412792 containerd[1511]: time="2025-01-13T21:42:06.412782715Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" returns successfully"
Jan 13 21:42:06.415294 containerd[1511]: time="2025-01-13T21:42:06.415248743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:4,}"
Jan 13 21:42:06.577931 containerd[1511]: time="2025-01-13T21:42:06.577856807Z" level=error msg="Failed to destroy network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.579320 containerd[1511]: time="2025-01-13T21:42:06.578404279Z" level=error msg="encountered an error cleaning up failed sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.579320 containerd[1511]: time="2025-01-13T21:42:06.578542129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.579583 kubelet[1918]: E0113 21:42:06.578847 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.579583 kubelet[1918]: E0113 21:42:06.578915 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g"
Jan 13 21:42:06.579583 kubelet[1918]: E0113 21:42:06.578942 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-56g8g"
Jan 13 21:42:06.579735 kubelet[1918]: E0113 21:42:06.579003 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-56g8g_default(4a557c18-91ac-453d-9b7a-3eb973d9a2bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-56g8g" podUID="4a557c18-91ac-453d-9b7a-3eb973d9a2bc"
Jan 13 21:42:06.582633 containerd[1511]: time="2025-01-13T21:42:06.582585568Z" level=error msg="Failed to destroy network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.583211 containerd[1511]: time="2025-01-13T21:42:06.582980268Z" level=error msg="encountered an error cleaning up failed sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.583211 containerd[1511]: time="2025-01-13T21:42:06.583054758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.583488 kubelet[1918]: E0113 21:42:06.583298 1918 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 21:42:06.583488 kubelet[1918]: E0113 21:42:06.583377 1918 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:42:06.583488 kubelet[1918]: E0113 21:42:06.583409 1918 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bmg8"
Jan 13 21:42:06.583759 kubelet[1918]: E0113 21:42:06.583487 1918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bmg8_calico-system(447cb4dd-d91a-4916-9a29-3a8fd8543edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bmg8" podUID="447cb4dd-d91a-4916-9a29-3a8fd8543edd"
Jan 13 21:42:06.978357 containerd[1511]: time="2025-01-13T21:42:06.978208735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:42:06.979307 containerd[1511]: time="2025-01-13T21:42:06.979254626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 13 21:42:06.980639 containerd[1511]: time="2025-01-13T21:42:06.980250029Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:42:06.982880 containerd[1511]: time="2025-01-13T21:42:06.982845697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:42:06.984890 containerd[1511]: time="2025-01-13T21:42:06.984843645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.723452439s"
Jan 13 21:42:06.985126 containerd[1511]: time="2025-01-13T21:42:06.985098559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 13 21:42:07.009589 containerd[1511]: time="2025-01-13T21:42:07.009318386Z" level=info msg="CreateContainer within sandbox \"64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 13 21:42:07.024529 containerd[1511]: time="2025-01-13T21:42:07.024443736Z" level=info msg="CreateContainer within sandbox \"64d234d308e736a9d2379ab1fd05040081979bdbdb66d397c67635292b05617e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"770e225925a6e47d99d585d338961c0eb8f18bc5706245fb6a3aa28fa02806d0\""
Jan 13 21:42:07.026384 containerd[1511]: time="2025-01-13T21:42:07.025562879Z" level=info msg="StartContainer for \"770e225925a6e47d99d585d338961c0eb8f18bc5706245fb6a3aa28fa02806d0\""
Jan 13 21:42:07.095076 kubelet[1918]: E0113 21:42:07.094999 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:42:07.136841 systemd[1]: Started cri-containerd-770e225925a6e47d99d585d338961c0eb8f18bc5706245fb6a3aa28fa02806d0.scope - libcontainer container 770e225925a6e47d99d585d338961c0eb8f18bc5706245fb6a3aa28fa02806d0.
Jan 13 21:42:07.187566 containerd[1511]: time="2025-01-13T21:42:07.187441393Z" level=info msg="StartContainer for \"770e225925a6e47d99d585d338961c0eb8f18bc5706245fb6a3aa28fa02806d0\" returns successfully"
Jan 13 21:42:07.281527 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 13 21:42:07.281706 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 13 21:42:07.321510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f-shm.mount: Deactivated successfully.
Jan 13 21:42:07.321880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf-shm.mount: Deactivated successfully.
Jan 13 21:42:07.322015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631143118.mount: Deactivated successfully.
Jan 13 21:42:07.415311 kubelet[1918]: I0113 21:42:07.415014 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf"
Jan 13 21:42:07.417590 containerd[1511]: time="2025-01-13T21:42:07.416726208Z" level=info msg="StopPodSandbox for \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\""
Jan 13 21:42:07.417590 containerd[1511]: time="2025-01-13T21:42:07.416973416Z" level=info msg="Ensure that sandbox 57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf in task-service has been cleanup successfully"
Jan 13 21:42:07.418454 containerd[1511]: time="2025-01-13T21:42:07.418066934Z" level=info msg="TearDown network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\" successfully"
Jan 13 21:42:07.418454 containerd[1511]: time="2025-01-13T21:42:07.418095298Z" level=info msg="StopPodSandbox for
\"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\" returns successfully" Jan 13 21:42:07.420159 containerd[1511]: time="2025-01-13T21:42:07.419568656Z" level=info msg="StopPodSandbox for \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\"" Jan 13 21:42:07.420159 containerd[1511]: time="2025-01-13T21:42:07.419712256Z" level=info msg="TearDown network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" successfully" Jan 13 21:42:07.420159 containerd[1511]: time="2025-01-13T21:42:07.419730741Z" level=info msg="StopPodSandbox for \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" returns successfully" Jan 13 21:42:07.423890 containerd[1511]: time="2025-01-13T21:42:07.423094677Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\"" Jan 13 21:42:07.423890 containerd[1511]: time="2025-01-13T21:42:07.423753135Z" level=info msg="TearDown network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" successfully" Jan 13 21:42:07.423890 containerd[1511]: time="2025-01-13T21:42:07.423772373Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" returns successfully" Jan 13 21:42:07.422044 systemd[1]: run-netns-cni\x2dd46c1c26\x2d1de8\x2dac1c\x2de384\x2dec0fe407035d.mount: Deactivated successfully. 
Jan 13 21:42:07.426698 containerd[1511]: time="2025-01-13T21:42:07.426567042Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\"" Jan 13 21:42:07.426811 containerd[1511]: time="2025-01-13T21:42:07.426758579Z" level=info msg="TearDown network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" successfully" Jan 13 21:42:07.426811 containerd[1511]: time="2025-01-13T21:42:07.426783249Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" returns successfully" Jan 13 21:42:07.428775 kubelet[1918]: I0113 21:42:07.427830 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ffp5b" podStartSLOduration=2.892692469 podStartE2EDuration="23.427790378s" podCreationTimestamp="2025-01-13 21:41:44 +0000 UTC" firstStartedPulling="2025-01-13 21:41:46.45111712 +0000 UTC m=+2.978496981" lastFinishedPulling="2025-01-13 21:42:06.986215018 +0000 UTC m=+23.513594890" observedRunningTime="2025-01-13 21:42:07.423223963 +0000 UTC m=+23.950603837" watchObservedRunningTime="2025-01-13 21:42:07.427790378 +0000 UTC m=+23.955170258" Jan 13 21:42:07.430033 containerd[1511]: time="2025-01-13T21:42:07.429955330Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\"" Jan 13 21:42:07.431868 kubelet[1918]: I0113 21:42:07.431351 1918 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f" Jan 13 21:42:07.432136 containerd[1511]: time="2025-01-13T21:42:07.432074859Z" level=info msg="StopPodSandbox for \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\"" Jan 13 21:42:07.433018 containerd[1511]: time="2025-01-13T21:42:07.432989192Z" level=info msg="Ensure that sandbox d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f in task-service has been cleanup 
successfully" Jan 13 21:42:07.433233 containerd[1511]: time="2025-01-13T21:42:07.432382301Z" level=info msg="TearDown network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" successfully" Jan 13 21:42:07.433233 containerd[1511]: time="2025-01-13T21:42:07.433215879Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" returns successfully" Jan 13 21:42:07.435553 containerd[1511]: time="2025-01-13T21:42:07.435526596Z" level=info msg="TearDown network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\" successfully" Jan 13 21:42:07.435719 containerd[1511]: time="2025-01-13T21:42:07.435693144Z" level=info msg="StopPodSandbox for \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\" returns successfully" Jan 13 21:42:07.435974 containerd[1511]: time="2025-01-13T21:42:07.435550401Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\"" Jan 13 21:42:07.437066 systemd[1]: run-netns-cni\x2d0b48b586\x2d3e53\x2d1ea3\x2def42\x2d19d0dd6c20f0.mount: Deactivated successfully. 
Jan 13 21:42:07.437702 containerd[1511]: time="2025-01-13T21:42:07.436571501Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully" Jan 13 21:42:07.437702 containerd[1511]: time="2025-01-13T21:42:07.437541208Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully" Jan 13 21:42:07.439629 containerd[1511]: time="2025-01-13T21:42:07.438511863Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\"" Jan 13 21:42:07.440831 containerd[1511]: time="2025-01-13T21:42:07.438761818Z" level=info msg="StopPodSandbox for \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\"" Jan 13 21:42:07.440831 containerd[1511]: time="2025-01-13T21:42:07.440693781Z" level=info msg="TearDown network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" successfully" Jan 13 21:42:07.440831 containerd[1511]: time="2025-01-13T21:42:07.440712322Z" level=info msg="StopPodSandbox for \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" returns successfully" Jan 13 21:42:07.440831 containerd[1511]: time="2025-01-13T21:42:07.440575855Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully" Jan 13 21:42:07.440831 containerd[1511]: time="2025-01-13T21:42:07.440773310Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully" Jan 13 21:42:07.442252 containerd[1511]: time="2025-01-13T21:42:07.442038758Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\"" Jan 13 21:42:07.442806 containerd[1511]: time="2025-01-13T21:42:07.442139965Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\"" Jan 13 21:42:07.442806 
containerd[1511]: time="2025-01-13T21:42:07.442548345Z" level=info msg="TearDown network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" successfully" Jan 13 21:42:07.442806 containerd[1511]: time="2025-01-13T21:42:07.442566292Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" returns successfully" Jan 13 21:42:07.442806 containerd[1511]: time="2025-01-13T21:42:07.442681208Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully" Jan 13 21:42:07.442806 containerd[1511]: time="2025-01-13T21:42:07.442701896Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully" Jan 13 21:42:07.443171 containerd[1511]: time="2025-01-13T21:42:07.443127552Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\"" Jan 13 21:42:07.443271 containerd[1511]: time="2025-01-13T21:42:07.443246579Z" level=info msg="TearDown network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" successfully" Jan 13 21:42:07.445364 containerd[1511]: time="2025-01-13T21:42:07.443270597Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" returns successfully" Jan 13 21:42:07.445364 containerd[1511]: time="2025-01-13T21:42:07.443952345Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\"" Jan 13 21:42:07.445364 containerd[1511]: time="2025-01-13T21:42:07.444359854Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully" Jan 13 21:42:07.445364 containerd[1511]: time="2025-01-13T21:42:07.444380678Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully" Jan 13 
21:42:07.445364 containerd[1511]: time="2025-01-13T21:42:07.445008354Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\"" Jan 13 21:42:07.445607 containerd[1511]: time="2025-01-13T21:42:07.445424043Z" level=info msg="TearDown network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" successfully" Jan 13 21:42:07.445607 containerd[1511]: time="2025-01-13T21:42:07.445443036Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" returns successfully" Jan 13 21:42:07.445782 containerd[1511]: time="2025-01-13T21:42:07.445754525Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\"" Jan 13 21:42:07.446823 containerd[1511]: time="2025-01-13T21:42:07.446439020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:5,}" Jan 13 21:42:07.446823 containerd[1511]: time="2025-01-13T21:42:07.446440231Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully" Jan 13 21:42:07.446823 containerd[1511]: time="2025-01-13T21:42:07.446576381Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully" Jan 13 21:42:07.447843 containerd[1511]: time="2025-01-13T21:42:07.447810415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:10,}" Jan 13 21:42:08.020387 systemd-networkd[1437]: calida55ce53600: Link UP Jan 13 21:42:08.021093 systemd-networkd[1437]: calida55ce53600: Gained carrier Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.553 [INFO][2929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:42:08.042268 
containerd[1511]: 2025-01-13 21:42:07.802 [INFO][2929] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.41.226-k8s-csi--node--driver--2bmg8-eth0 csi-node-driver- calico-system 447cb4dd-d91a-4916-9a29-3a8fd8543edd 1017 0 2025-01-13 21:41:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.41.226 csi-node-driver-2bmg8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calida55ce53600 [] []}} ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.802 [INFO][2929] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.946 [INFO][2959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" HandleID="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Workload="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.962 [INFO][2959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" HandleID="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" 
Workload="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e5ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.41.226", "pod":"csi-node-driver-2bmg8", "timestamp":"2025-01-13 21:42:07.946007623 +0000 UTC"}, Hostname:"10.230.41.226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.962 [INFO][2959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.962 [INFO][2959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.962 [INFO][2959] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.41.226' Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.965 [INFO][2959] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.972 [INFO][2959] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.980 [INFO][2959] ipam/ipam.go 489: Trying affinity for 192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.983 [INFO][2959] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.987 [INFO][2959] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.987 [INFO][2959] ipam/ipam.go 1180: Attempting to assign 1 addresses from 
block block=192.168.73.0/26 handle="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.989 [INFO][2959] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:07.995 [INFO][2959] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:08.004 [INFO][2959] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.1/26] block=192.168.73.0/26 handle="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:08.004 [INFO][2959] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.1/26] handle="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" host="10.230.41.226" Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:08.004 [INFO][2959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:42:08.042268 containerd[1511]: 2025-01-13 21:42:08.004 [INFO][2959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.1/26] IPv6=[] ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" HandleID="k8s-pod-network.7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Workload="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" Jan 13 21:42:08.043441 containerd[1511]: 2025-01-13 21:42:08.007 [INFO][2929] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-csi--node--driver--2bmg8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"447cb4dd-d91a-4916-9a29-3a8fd8543edd", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.41.226", ContainerID:"", Pod:"csi-node-driver-2bmg8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida55ce53600", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:08.043441 containerd[1511]: 2025-01-13 21:42:08.007 [INFO][2929] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.1/32] ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" Jan 13 21:42:08.043441 containerd[1511]: 2025-01-13 21:42:08.008 [INFO][2929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida55ce53600 ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" Jan 13 21:42:08.043441 containerd[1511]: 2025-01-13 21:42:08.022 [INFO][2929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" Jan 13 21:42:08.043441 containerd[1511]: 2025-01-13 21:42:08.023 [INFO][2929] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-csi--node--driver--2bmg8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"447cb4dd-d91a-4916-9a29-3a8fd8543edd", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, 
time.January, 13, 21, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.41.226", ContainerID:"7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e", Pod:"csi-node-driver-2bmg8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida55ce53600", MAC:"fe:23:75:da:41:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:08.043441 containerd[1511]: 2025-01-13 21:42:08.039 [INFO][2929] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e" Namespace="calico-system" Pod="csi-node-driver-2bmg8" WorkloadEndpoint="10.230.41.226-k8s-csi--node--driver--2bmg8-eth0" Jan 13 21:42:08.072501 systemd-networkd[1437]: cali53815e3135b: Link UP Jan 13 21:42:08.074224 systemd-networkd[1437]: cali53815e3135b: Gained carrier Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:07.538 [INFO][2918] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:07.802 [INFO][2918] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0 nginx-deployment-85f456d6dd- default 4a557c18-91ac-453d-9b7a-3eb973d9a2bc 1119 0 2025-01-13 21:42:02 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.41.226 nginx-deployment-85f456d6dd-56g8g eth0 default [] [] [kns.default ksa.default.default] cali53815e3135b [] []}} ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:07.802 [INFO][2918] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:07.946 [INFO][2958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" HandleID="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Workload="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:07.962 [INFO][2958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" HandleID="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Workload="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003196b0), Attrs:map[string]string{"namespace":"default", "node":"10.230.41.226", "pod":"nginx-deployment-85f456d6dd-56g8g", 
"timestamp":"2025-01-13 21:42:07.946299871 +0000 UTC"}, Hostname:"10.230.41.226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:07.963 [INFO][2958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.005 [INFO][2958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.005 [INFO][2958] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.41.226' Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.008 [INFO][2958] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.016 [INFO][2958] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.026 [INFO][2958] ipam/ipam.go 489: Trying affinity for 192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.030 [INFO][2958] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.033 [INFO][2958] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.033 [INFO][2958] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.036 [INFO][2958] ipam/ipam.go 1685: Creating new 
handle: k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38 Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.047 [INFO][2958] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.058 [INFO][2958] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.2/26] block=192.168.73.0/26 handle="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.058 [INFO][2958] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.2/26] handle="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" host="10.230.41.226" Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.058 [INFO][2958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:42:08.088418 containerd[1511]: 2025-01-13 21:42:08.058 [INFO][2958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.2/26] IPv6=[] ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" HandleID="k8s-pod-network.91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Workload="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" Jan 13 21:42:08.089715 containerd[1511]: 2025-01-13 21:42:08.062 [INFO][2918] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4a557c18-91ac-453d-9b7a-3eb973d9a2bc", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.41.226", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-56g8g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali53815e3135b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:08.089715 containerd[1511]: 2025-01-13 21:42:08.063 [INFO][2918] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.2/32] ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" Jan 13 21:42:08.089715 containerd[1511]: 2025-01-13 21:42:08.063 [INFO][2918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53815e3135b ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" Jan 13 21:42:08.089715 containerd[1511]: 2025-01-13 21:42:08.074 [INFO][2918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" Jan 13 21:42:08.089715 containerd[1511]: 2025-01-13 21:42:08.075 [INFO][2918] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4a557c18-91ac-453d-9b7a-3eb973d9a2bc", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 42, 2, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.41.226", ContainerID:"91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38", Pod:"nginx-deployment-85f456d6dd-56g8g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali53815e3135b", MAC:"1e:a6:ae:5f:df:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:08.089715 containerd[1511]: 2025-01-13 21:42:08.086 [INFO][2918] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38" Namespace="default" Pod="nginx-deployment-85f456d6dd-56g8g" WorkloadEndpoint="10.230.41.226-k8s-nginx--deployment--85f456d6dd--56g8g-eth0" Jan 13 21:42:08.093361 containerd[1511]: time="2025-01-13T21:42:08.093152729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:42:08.093361 containerd[1511]: time="2025-01-13T21:42:08.093260852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:42:08.093361 containerd[1511]: time="2025-01-13T21:42:08.093284178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:08.094809 containerd[1511]: time="2025-01-13T21:42:08.094287924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:08.096212 kubelet[1918]: E0113 21:42:08.096164 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:08.122546 systemd[1]: Started cri-containerd-7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e.scope - libcontainer container 7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e. Jan 13 21:42:08.150682 containerd[1511]: time="2025-01-13T21:42:08.150411495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:42:08.150682 containerd[1511]: time="2025-01-13T21:42:08.150479186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:42:08.150682 containerd[1511]: time="2025-01-13T21:42:08.150496646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:08.150682 containerd[1511]: time="2025-01-13T21:42:08.150615894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:08.163358 containerd[1511]: time="2025-01-13T21:42:08.163280422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bmg8,Uid:447cb4dd-d91a-4916-9a29-3a8fd8543edd,Namespace:calico-system,Attempt:10,} returns sandbox id \"7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e\"" Jan 13 21:42:08.169031 containerd[1511]: time="2025-01-13T21:42:08.168854761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:42:08.183634 systemd[1]: Started cri-containerd-91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38.scope - libcontainer container 91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38. Jan 13 21:42:08.244731 containerd[1511]: time="2025-01-13T21:42:08.244663223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-56g8g,Uid:4a557c18-91ac-453d-9b7a-3eb973d9a2bc,Namespace:default,Attempt:5,} returns sandbox id \"91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38\"" Jan 13 21:42:08.997378 kernel: bpftool[3201]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:42:09.097264 kubelet[1918]: E0113 21:42:09.097205 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:09.188570 systemd-networkd[1437]: calida55ce53600: Gained IPv6LL Jan 13 21:42:09.349202 systemd-networkd[1437]: vxlan.calico: Link UP Jan 13 21:42:09.349216 systemd-networkd[1437]: vxlan.calico: Gained carrier Jan 13 21:42:09.704829 systemd-networkd[1437]: cali53815e3135b: Gained IPv6LL Jan 13 21:42:09.765829 containerd[1511]: time="2025-01-13T21:42:09.765556619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:09.773954 containerd[1511]: time="2025-01-13T21:42:09.772658689Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:42:09.773954 containerd[1511]: time="2025-01-13T21:42:09.772834831Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:09.780150 containerd[1511]: time="2025-01-13T21:42:09.780107516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:09.785983 containerd[1511]: time="2025-01-13T21:42:09.785908539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.616867329s" Jan 13 21:42:09.786292 containerd[1511]: time="2025-01-13T21:42:09.786110633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:42:09.789519 containerd[1511]: time="2025-01-13T21:42:09.788665570Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:42:09.790517 containerd[1511]: time="2025-01-13T21:42:09.790475875Z" level=info msg="CreateContainer within sandbox \"7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:42:09.812614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93016051.mount: Deactivated successfully. 
Jan 13 21:42:09.814614 containerd[1511]: time="2025-01-13T21:42:09.814521312Z" level=info msg="CreateContainer within sandbox \"7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f7d61195ab70d07099b43639004833458d3dfb45680a9368c118152a05914b05\"" Jan 13 21:42:09.816526 containerd[1511]: time="2025-01-13T21:42:09.815956743Z" level=info msg="StartContainer for \"f7d61195ab70d07099b43639004833458d3dfb45680a9368c118152a05914b05\"" Jan 13 21:42:09.894546 systemd[1]: Started cri-containerd-f7d61195ab70d07099b43639004833458d3dfb45680a9368c118152a05914b05.scope - libcontainer container f7d61195ab70d07099b43639004833458d3dfb45680a9368c118152a05914b05. Jan 13 21:42:09.938724 containerd[1511]: time="2025-01-13T21:42:09.938574017Z" level=info msg="StartContainer for \"f7d61195ab70d07099b43639004833458d3dfb45680a9368c118152a05914b05\" returns successfully" Jan 13 21:42:10.097481 kubelet[1918]: E0113 21:42:10.097400 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:10.788521 systemd-networkd[1437]: vxlan.calico: Gained IPv6LL Jan 13 21:42:11.098371 kubelet[1918]: E0113 21:42:11.098066 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:12.098249 kubelet[1918]: E0113 21:42:12.098203 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:13.099485 kubelet[1918]: E0113 21:42:13.099406 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:13.518308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842950923.mount: Deactivated successfully. 
Jan 13 21:42:14.100449 kubelet[1918]: E0113 21:42:14.100400 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:15.100810 kubelet[1918]: E0113 21:42:15.100726 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:15.298184 containerd[1511]: time="2025-01-13T21:42:15.298116278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:15.299819 containerd[1511]: time="2025-01-13T21:42:15.299761763Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 21:42:15.300968 containerd[1511]: time="2025-01-13T21:42:15.300910122Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:15.304348 containerd[1511]: time="2025-01-13T21:42:15.304184844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:15.305877 containerd[1511]: time="2025-01-13T21:42:15.305718052Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.517013483s" Jan 13 21:42:15.305877 containerd[1511]: time="2025-01-13T21:42:15.305757338Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:42:15.307862 containerd[1511]: 
time="2025-01-13T21:42:15.307833749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:42:15.316836 containerd[1511]: time="2025-01-13T21:42:15.316774988Z" level=info msg="CreateContainer within sandbox \"91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 21:42:15.331703 containerd[1511]: time="2025-01-13T21:42:15.331647200Z" level=info msg="CreateContainer within sandbox \"91c5c8c63fbee12119c2623a6d2eed310e3f3039177b0496c7373bee6a0fde38\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b9f9859c244f0210959593d6a2d7df63b69b87d9869c3a18657f6435e71111c2\"" Jan 13 21:42:15.332443 containerd[1511]: time="2025-01-13T21:42:15.332074609Z" level=info msg="StartContainer for \"b9f9859c244f0210959593d6a2d7df63b69b87d9869c3a18657f6435e71111c2\"" Jan 13 21:42:15.369824 systemd[1]: run-containerd-runc-k8s.io-b9f9859c244f0210959593d6a2d7df63b69b87d9869c3a18657f6435e71111c2-runc.CF7rGl.mount: Deactivated successfully. Jan 13 21:42:15.377535 systemd[1]: Started cri-containerd-b9f9859c244f0210959593d6a2d7df63b69b87d9869c3a18657f6435e71111c2.scope - libcontainer container b9f9859c244f0210959593d6a2d7df63b69b87d9869c3a18657f6435e71111c2. 
Jan 13 21:42:15.414945 containerd[1511]: time="2025-01-13T21:42:15.414846968Z" level=info msg="StartContainer for \"b9f9859c244f0210959593d6a2d7df63b69b87d9869c3a18657f6435e71111c2\" returns successfully" Jan 13 21:42:16.101966 kubelet[1918]: E0113 21:42:16.101871 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:16.691642 kubelet[1918]: I0113 21:42:16.691554 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-56g8g" podStartSLOduration=7.631614506 podStartE2EDuration="14.69152941s" podCreationTimestamp="2025-01-13 21:42:02 +0000 UTC" firstStartedPulling="2025-01-13 21:42:08.247122621 +0000 UTC m=+24.774502488" lastFinishedPulling="2025-01-13 21:42:15.307037515 +0000 UTC m=+31.834417392" observedRunningTime="2025-01-13 21:42:15.510872372 +0000 UTC m=+32.038252256" watchObservedRunningTime="2025-01-13 21:42:16.69152941 +0000 UTC m=+33.218909276" Jan 13 21:42:16.867363 containerd[1511]: time="2025-01-13T21:42:16.867271542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:16.869790 containerd[1511]: time="2025-01-13T21:42:16.869039285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:42:16.870026 containerd[1511]: time="2025-01-13T21:42:16.869931739Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:16.872589 containerd[1511]: time="2025-01-13T21:42:16.872489444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jan 13 21:42:16.873784 containerd[1511]: time="2025-01-13T21:42:16.873742708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.565871171s" Jan 13 21:42:16.873874 containerd[1511]: time="2025-01-13T21:42:16.873787857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:42:16.877581 containerd[1511]: time="2025-01-13T21:42:16.877535302Z" level=info msg="CreateContainer within sandbox \"7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:42:16.899942 containerd[1511]: time="2025-01-13T21:42:16.899867984Z" level=info msg="CreateContainer within sandbox \"7180440e33ca16138571a9e61693c3ac390eb9ea2da82a2371c7a5d5e637eb9e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"46a20a806c69f51f5a64d5014c3fd2d9e2d71f6b7e74cf2ef584b0b9363e1989\"" Jan 13 21:42:16.904937 containerd[1511]: time="2025-01-13T21:42:16.904876766Z" level=info msg="StartContainer for \"46a20a806c69f51f5a64d5014c3fd2d9e2d71f6b7e74cf2ef584b0b9363e1989\"" Jan 13 21:42:16.947574 systemd[1]: Started cri-containerd-46a20a806c69f51f5a64d5014c3fd2d9e2d71f6b7e74cf2ef584b0b9363e1989.scope - libcontainer container 46a20a806c69f51f5a64d5014c3fd2d9e2d71f6b7e74cf2ef584b0b9363e1989. 
Jan 13 21:42:16.996122 containerd[1511]: time="2025-01-13T21:42:16.995940957Z" level=info msg="StartContainer for \"46a20a806c69f51f5a64d5014c3fd2d9e2d71f6b7e74cf2ef584b0b9363e1989\" returns successfully" Jan 13 21:42:17.102563 kubelet[1918]: E0113 21:42:17.102450 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:17.234127 kubelet[1918]: I0113 21:42:17.234076 1918 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:42:17.234926 kubelet[1918]: I0113 21:42:17.234151 1918 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:42:17.532895 kubelet[1918]: I0113 21:42:17.532553 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2bmg8" podStartSLOduration=24.824481993 podStartE2EDuration="33.532535028s" podCreationTimestamp="2025-01-13 21:41:44 +0000 UTC" firstStartedPulling="2025-01-13 21:42:08.167705907 +0000 UTC m=+24.695085767" lastFinishedPulling="2025-01-13 21:42:16.875758936 +0000 UTC m=+33.403138802" observedRunningTime="2025-01-13 21:42:17.530879569 +0000 UTC m=+34.058259459" watchObservedRunningTime="2025-01-13 21:42:17.532535028 +0000 UTC m=+34.059914917" Jan 13 21:42:18.102948 kubelet[1918]: E0113 21:42:18.102877 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:19.103256 kubelet[1918]: E0113 21:42:19.103131 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:20.104005 kubelet[1918]: E0113 21:42:20.103929 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 
21:42:21.104716 kubelet[1918]: E0113 21:42:21.104636 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:22.105823 kubelet[1918]: E0113 21:42:22.105747 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:23.105974 kubelet[1918]: E0113 21:42:23.105904 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:24.073348 kubelet[1918]: E0113 21:42:24.073278 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:24.107176 kubelet[1918]: E0113 21:42:24.107095 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:25.107880 kubelet[1918]: E0113 21:42:25.107806 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:25.480658 kubelet[1918]: I0113 21:42:25.480603 1918 topology_manager.go:215] "Topology Admit Handler" podUID="918c5b2d-fc84-4b0a-8837-e86c617f1433" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 21:42:25.492715 systemd[1]: Created slice kubepods-besteffort-pod918c5b2d_fc84_4b0a_8837_e86c617f1433.slice - libcontainer container kubepods-besteffort-pod918c5b2d_fc84_4b0a_8837_e86c617f1433.slice. 
Jan 13 21:42:25.579021 kubelet[1918]: I0113 21:42:25.578901 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/918c5b2d-fc84-4b0a-8837-e86c617f1433-data\") pod \"nfs-server-provisioner-0\" (UID: \"918c5b2d-fc84-4b0a-8837-e86c617f1433\") " pod="default/nfs-server-provisioner-0" Jan 13 21:42:25.679637 kubelet[1918]: I0113 21:42:25.679492 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6rb\" (UniqueName: \"kubernetes.io/projected/918c5b2d-fc84-4b0a-8837-e86c617f1433-kube-api-access-th6rb\") pod \"nfs-server-provisioner-0\" (UID: \"918c5b2d-fc84-4b0a-8837-e86c617f1433\") " pod="default/nfs-server-provisioner-0" Jan 13 21:42:25.797755 containerd[1511]: time="2025-01-13T21:42:25.797164598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:918c5b2d-fc84-4b0a-8837-e86c617f1433,Namespace:default,Attempt:0,}" Jan 13 21:42:25.960391 systemd-networkd[1437]: cali60e51b789ff: Link UP Jan 13 21:42:25.960682 systemd-networkd[1437]: cali60e51b789ff: Gained carrier Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.856 [INFO][3535] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.41.226-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 918c5b2d-fc84-4b0a-8837-e86c617f1433 1256 0 2025-01-13 21:42:25 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.41.226 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.856 [INFO][3535] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.892 [INFO][3545] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" HandleID="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Workload="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.907 [INFO][3545] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" HandleID="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Workload="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2aa0), Attrs:map[string]string{"namespace":"default", "node":"10.230.41.226", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 21:42:25.892677876 +0000 UTC"}, Hostname:"10.230.41.226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.907 [INFO][3545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.907 [INFO][3545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.907 [INFO][3545] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.41.226' Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.910 [INFO][3545] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.917 [INFO][3545] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.924 [INFO][3545] ipam/ipam.go 489: Trying affinity for 192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.927 [INFO][3545] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.931 [INFO][3545] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.932 [INFO][3545] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.934 [INFO][3545] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4 Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 
21:42:25.941 [INFO][3545] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.955 [INFO][3545] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.3/26] block=192.168.73.0/26 handle="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.955 [INFO][3545] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.3/26] handle="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" host="10.230.41.226" Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.955 [INFO][3545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:42:25.979748 containerd[1511]: 2025-01-13 21:42:25.955 [INFO][3545] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.3/26] IPv6=[] ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" HandleID="k8s-pod-network.ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Workload="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:42:25.981201 containerd[1511]: 2025-01-13 21:42:25.956 [INFO][3535] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"918c5b2d-fc84-4b0a-8837-e86c617f1433", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.January, 
13, 21, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.41.226", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.73.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:25.981201 containerd[1511]: 2025-01-13 21:42:25.957 [INFO][3535] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.3/32] ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:42:25.981201 containerd[1511]: 2025-01-13 21:42:25.957 [INFO][3535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:42:25.981201 containerd[1511]: 2025-01-13 21:42:25.961 [INFO][3535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:42:25.981605 
containerd[1511]: 2025-01-13 21:42:25.962 [INFO][3535] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"918c5b2d-fc84-4b0a-8837-e86c617f1433", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.41.226", ContainerID:"ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.73.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"be:3f:cb:93:5b:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:25.981605 containerd[1511]: 2025-01-13 21:42:25.975 [INFO][3535] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4" Namespace="default" 
Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.41.226-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:42:26.019718 containerd[1511]: time="2025-01-13T21:42:26.019576896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:42:26.019718 containerd[1511]: time="2025-01-13T21:42:26.019662548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:42:26.019718 containerd[1511]: time="2025-01-13T21:42:26.019682246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:26.020118 containerd[1511]: time="2025-01-13T21:42:26.019781138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:26.045079 systemd[1]: run-containerd-runc-k8s.io-ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4-runc.u9aBdy.mount: Deactivated successfully. Jan 13 21:42:26.058587 systemd[1]: Started cri-containerd-ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4.scope - libcontainer container ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4. 
Jan 13 21:42:26.108445 kubelet[1918]: E0113 21:42:26.108380 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:26.113852 containerd[1511]: time="2025-01-13T21:42:26.113810093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:918c5b2d-fc84-4b0a-8837-e86c617f1433,Namespace:default,Attempt:0,} returns sandbox id \"ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4\"" Jan 13 21:42:26.116015 containerd[1511]: time="2025-01-13T21:42:26.115984984Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 21:42:27.109285 kubelet[1918]: E0113 21:42:27.109171 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:27.365456 systemd-networkd[1437]: cali60e51b789ff: Gained IPv6LL Jan 13 21:42:28.110669 kubelet[1918]: E0113 21:42:28.110479 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:29.073938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275477038.mount: Deactivated successfully. 
Jan 13 21:42:29.110813 kubelet[1918]: E0113 21:42:29.110733 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:30.112057 kubelet[1918]: E0113 21:42:30.111656 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:31.112191 kubelet[1918]: E0113 21:42:31.112123 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:31.903977 containerd[1511]: time="2025-01-13T21:42:31.903900514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:31.905623 containerd[1511]: time="2025-01-13T21:42:31.905554460Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 13 21:42:31.906807 containerd[1511]: time="2025-01-13T21:42:31.906724747Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:31.910722 containerd[1511]: time="2025-01-13T21:42:31.910648509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:31.912413 containerd[1511]: time="2025-01-13T21:42:31.912188796Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 5.796161803s" Jan 13 21:42:31.912413 containerd[1511]: time="2025-01-13T21:42:31.912237823Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 21:42:31.915983 containerd[1511]: time="2025-01-13T21:42:31.915938763Z" level=info msg="CreateContainer within sandbox \"ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 21:42:31.936441 containerd[1511]: time="2025-01-13T21:42:31.936242840Z" level=info msg="CreateContainer within sandbox \"ed7a9dd2ab285c6b033cc2f257f4e913503a7b52d80f46fc8570d81b1972b1f4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"851764dccfd948d5753a167af9f3cb0a62d1d1d6ccba48e7a6308c7ce37f92fd\"" Jan 13 21:42:31.937766 containerd[1511]: time="2025-01-13T21:42:31.937712519Z" level=info msg="StartContainer for \"851764dccfd948d5753a167af9f3cb0a62d1d1d6ccba48e7a6308c7ce37f92fd\"" Jan 13 21:42:31.985675 systemd[1]: Started cri-containerd-851764dccfd948d5753a167af9f3cb0a62d1d1d6ccba48e7a6308c7ce37f92fd.scope - libcontainer container 851764dccfd948d5753a167af9f3cb0a62d1d1d6ccba48e7a6308c7ce37f92fd. 
Jan 13 21:42:32.031774 containerd[1511]: time="2025-01-13T21:42:32.031712135Z" level=info msg="StartContainer for \"851764dccfd948d5753a167af9f3cb0a62d1d1d6ccba48e7a6308c7ce37f92fd\" returns successfully" Jan 13 21:42:32.113182 kubelet[1918]: E0113 21:42:32.113115 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:32.588712 kubelet[1918]: I0113 21:42:32.588605 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.790136656 podStartE2EDuration="7.588578886s" podCreationTimestamp="2025-01-13 21:42:25 +0000 UTC" firstStartedPulling="2025-01-13 21:42:26.115476285 +0000 UTC m=+42.642856144" lastFinishedPulling="2025-01-13 21:42:31.913918505 +0000 UTC m=+48.441298374" observedRunningTime="2025-01-13 21:42:32.587888897 +0000 UTC m=+49.115268776" watchObservedRunningTime="2025-01-13 21:42:32.588578886 +0000 UTC m=+49.115958765" Jan 13 21:42:33.113775 kubelet[1918]: E0113 21:42:33.113660 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:34.114555 kubelet[1918]: E0113 21:42:34.114464 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:35.115089 kubelet[1918]: E0113 21:42:35.114994 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:36.115742 kubelet[1918]: E0113 21:42:36.115656 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:37.116916 kubelet[1918]: E0113 21:42:37.116815 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:38.117907 kubelet[1918]: E0113 21:42:38.117781 1918 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:39.118863 kubelet[1918]: E0113 21:42:39.118763 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:40.119950 kubelet[1918]: E0113 21:42:40.119847 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:41.120677 kubelet[1918]: E0113 21:42:41.120591 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:41.715260 kubelet[1918]: I0113 21:42:41.715185 1918 topology_manager.go:215] "Topology Admit Handler" podUID="314885b9-1cbc-4e18-b87d-f10481fc6df2" podNamespace="default" podName="test-pod-1" Jan 13 21:42:41.724551 systemd[1]: Created slice kubepods-besteffort-pod314885b9_1cbc_4e18_b87d_f10481fc6df2.slice - libcontainer container kubepods-besteffort-pod314885b9_1cbc_4e18_b87d_f10481fc6df2.slice. Jan 13 21:42:41.871082 kubelet[1918]: I0113 21:42:41.870971 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd4zv\" (UniqueName: \"kubernetes.io/projected/314885b9-1cbc-4e18-b87d-f10481fc6df2-kube-api-access-rd4zv\") pod \"test-pod-1\" (UID: \"314885b9-1cbc-4e18-b87d-f10481fc6df2\") " pod="default/test-pod-1" Jan 13 21:42:41.871082 kubelet[1918]: I0113 21:42:41.871080 1918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5a45050-90b7-468d-a446-60912942c22a\" (UniqueName: \"kubernetes.io/nfs/314885b9-1cbc-4e18-b87d-f10481fc6df2-pvc-c5a45050-90b7-468d-a446-60912942c22a\") pod \"test-pod-1\" (UID: \"314885b9-1cbc-4e18-b87d-f10481fc6df2\") " pod="default/test-pod-1" Jan 13 21:42:42.022717 kernel: FS-Cache: Loaded Jan 13 21:42:42.106635 kernel: RPC: Registered named UNIX socket transport module. 
Jan 13 21:42:42.106909 kernel: RPC: Registered udp transport module. Jan 13 21:42:42.106958 kernel: RPC: Registered tcp transport module. Jan 13 21:42:42.107551 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 21:42:42.108594 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 13 21:42:42.121160 kubelet[1918]: E0113 21:42:42.121015 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:42.374039 kernel: NFS: Registering the id_resolver key type Jan 13 21:42:42.374390 kernel: Key type id_resolver registered Jan 13 21:42:42.374446 kernel: Key type id_legacy registered Jan 13 21:42:42.432898 nfsidmap[3737]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 13 21:42:42.442499 nfsidmap[3740]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 13 21:42:42.629482 containerd[1511]: time="2025-01-13T21:42:42.629400022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:314885b9-1cbc-4e18-b87d-f10481fc6df2,Namespace:default,Attempt:0,}" Jan 13 21:42:42.803492 systemd-networkd[1437]: cali5ec59c6bf6e: Link UP Jan 13 21:42:42.804767 systemd-networkd[1437]: cali5ec59c6bf6e: Gained carrier Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.697 [INFO][3744] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.41.226-k8s-test--pod--1-eth0 default 314885b9-1cbc-4e18-b87d-f10481fc6df2 1317 0 2025-01-13 21:42:28 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.41.226 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.698 [INFO][3744] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-eth0" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.742 [INFO][3755] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" HandleID="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Workload="10.230.41.226-k8s-test--pod--1-eth0" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.755 [INFO][3755] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" HandleID="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Workload="10.230.41.226-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293c40), Attrs:map[string]string{"namespace":"default", "node":"10.230.41.226", "pod":"test-pod-1", "timestamp":"2025-01-13 21:42:42.742001902 +0000 UTC"}, Hostname:"10.230.41.226", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.755 [INFO][3755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.755 [INFO][3755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.755 [INFO][3755] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.41.226' Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.758 [INFO][3755] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.766 [INFO][3755] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.772 [INFO][3755] ipam/ipam.go 489: Trying affinity for 192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.775 [INFO][3755] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.778 [INFO][3755] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.0/26 host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.778 [INFO][3755] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.0/26 handle="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.781 [INFO][3755] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.787 [INFO][3755] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.0/26 handle="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.795 [INFO][3755] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.4/26] block=192.168.73.0/26 
handle="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.795 [INFO][3755] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.4/26] handle="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" host="10.230.41.226" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.795 [INFO][3755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.795 [INFO][3755] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.4/26] IPv6=[] ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" HandleID="k8s-pod-network.598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Workload="10.230.41.226-k8s-test--pod--1-eth0" Jan 13 21:42:42.814659 containerd[1511]: 2025-01-13 21:42:42.797 [INFO][3744] cni-plugin/k8s.go 386: Populated endpoint ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"314885b9-1cbc-4e18-b87d-f10481fc6df2", ResourceVersion:"1317", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.230.41.226", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:42.820963 containerd[1511]: 2025-01-13 21:42:42.797 [INFO][3744] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.4/32] ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-eth0" Jan 13 21:42:42.820963 containerd[1511]: 2025-01-13 21:42:42.797 [INFO][3744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-eth0" Jan 13 21:42:42.820963 containerd[1511]: 2025-01-13 21:42:42.801 [INFO][3744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-eth0" Jan 13 21:42:42.820963 containerd[1511]: 2025-01-13 21:42:42.801 [INFO][3744] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.41.226-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"314885b9-1cbc-4e18-b87d-f10481fc6df2", ResourceVersion:"1317", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 13, 21, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.41.226", ContainerID:"598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"4e:07:2e:3b:08:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:42:42.820963 containerd[1511]: 2025-01-13 21:42:42.811 [INFO][3744] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.41.226-k8s-test--pod--1-eth0" Jan 13 21:42:42.854967 containerd[1511]: time="2025-01-13T21:42:42.854774270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:42:42.854967 containerd[1511]: time="2025-01-13T21:42:42.854901212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:42:42.854967 containerd[1511]: time="2025-01-13T21:42:42.854925413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:42.855962 containerd[1511]: time="2025-01-13T21:42:42.855839822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:42:42.883663 systemd[1]: Started cri-containerd-598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f.scope - libcontainer container 598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f. Jan 13 21:42:42.941083 containerd[1511]: time="2025-01-13T21:42:42.940895653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:314885b9-1cbc-4e18-b87d-f10481fc6df2,Namespace:default,Attempt:0,} returns sandbox id \"598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f\"" Jan 13 21:42:42.944075 containerd[1511]: time="2025-01-13T21:42:42.943754411Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:42:43.121821 kubelet[1918]: E0113 21:42:43.121627 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:42:43.304719 containerd[1511]: time="2025-01-13T21:42:43.304520825Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:42:43.306177 containerd[1511]: time="2025-01-13T21:42:43.306111376Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 21:42:43.320680 containerd[1511]: time="2025-01-13T21:42:43.320602384Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 376.799192ms" Jan 13 21:42:43.320680 containerd[1511]: 
time="2025-01-13T21:42:43.320673111Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:42:43.323976 containerd[1511]: time="2025-01-13T21:42:43.323761105Z" level=info msg="CreateContainer within sandbox \"598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 21:42:43.343376 containerd[1511]: time="2025-01-13T21:42:43.343255589Z" level=info msg="CreateContainer within sandbox \"598638ccbb9b5417f329ce2f8f81f7fa74270801c58f953821d9dc2a3d29099f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"813ee735c17dec3d647e47cb6909164b91e0c45a6f37d2fc7ef360728918a3ef\"" Jan 13 21:42:43.351878 containerd[1511]: time="2025-01-13T21:42:43.351786473Z" level=info msg="StartContainer for \"813ee735c17dec3d647e47cb6909164b91e0c45a6f37d2fc7ef360728918a3ef\"" Jan 13 21:42:43.397558 systemd[1]: Started cri-containerd-813ee735c17dec3d647e47cb6909164b91e0c45a6f37d2fc7ef360728918a3ef.scope - libcontainer container 813ee735c17dec3d647e47cb6909164b91e0c45a6f37d2fc7ef360728918a3ef. 
Jan 13 21:42:43.438760 containerd[1511]: time="2025-01-13T21:42:43.438604488Z" level=info msg="StartContainer for \"813ee735c17dec3d647e47cb6909164b91e0c45a6f37d2fc7ef360728918a3ef\" returns successfully"
Jan 13 21:42:43.631842 kubelet[1918]: I0113 21:42:43.631685 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.253142043 podStartE2EDuration="15.631664381s" podCreationTimestamp="2025-01-13 21:42:28 +0000 UTC" firstStartedPulling="2025-01-13 21:42:42.943305152 +0000 UTC m=+59.470685011" lastFinishedPulling="2025-01-13 21:42:43.321827465 +0000 UTC m=+59.849207349" observedRunningTime="2025-01-13 21:42:43.630850849 +0000 UTC m=+60.158230727" watchObservedRunningTime="2025-01-13 21:42:43.631664381 +0000 UTC m=+60.159044261"
Jan 13 21:42:44.073811 kubelet[1918]: E0113 21:42:44.073719 1918 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:42:44.099658 containerd[1511]: time="2025-01-13T21:42:44.099384816Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\""
Jan 13 21:42:44.099658 containerd[1511]: time="2025-01-13T21:42:44.099551686Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully"
Jan 13 21:42:44.099658 containerd[1511]: time="2025-01-13T21:42:44.099570868Z" level=info msg="StopPodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully"
Jan 13 21:42:44.105432 containerd[1511]: time="2025-01-13T21:42:44.105374680Z" level=info msg="RemovePodSandbox for \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\""
Jan 13 21:42:44.113139 containerd[1511]: time="2025-01-13T21:42:44.113074421Z" level=info msg="Forcibly stopping sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\""
Jan 13 21:42:44.120222 containerd[1511]: time="2025-01-13T21:42:44.113200020Z" level=info msg="TearDown network for sandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" successfully"
Jan 13 21:42:44.121839 kubelet[1918]: E0113 21:42:44.121781 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:42:44.147724 containerd[1511]: time="2025-01-13T21:42:44.147643835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.147927 containerd[1511]: time="2025-01-13T21:42:44.147735294Z" level=info msg="RemovePodSandbox \"c538602d8ef0bf490da956418735e2b767f9e170e86257d7a2c2f477869131c3\" returns successfully"
Jan 13 21:42:44.148477 containerd[1511]: time="2025-01-13T21:42:44.148432419Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\""
Jan 13 21:42:44.148586 containerd[1511]: time="2025-01-13T21:42:44.148556026Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully"
Jan 13 21:42:44.148653 containerd[1511]: time="2025-01-13T21:42:44.148580228Z" level=info msg="StopPodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully"
Jan 13 21:42:44.149015 containerd[1511]: time="2025-01-13T21:42:44.148975669Z" level=info msg="RemovePodSandbox for \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\""
Jan 13 21:42:44.149015 containerd[1511]: time="2025-01-13T21:42:44.149011895Z" level=info msg="Forcibly stopping sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\""
Jan 13 21:42:44.149470 containerd[1511]: time="2025-01-13T21:42:44.149090628Z" level=info msg="TearDown network for sandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" successfully"
Jan 13 21:42:44.151718 containerd[1511]: time="2025-01-13T21:42:44.151679267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.151801 containerd[1511]: time="2025-01-13T21:42:44.151726567Z" level=info msg="RemovePodSandbox \"4484eb28f23e121bfc950dd891971b6da066f1f629f86cb46427bc62f020ef8b\" returns successfully"
Jan 13 21:42:44.152148 containerd[1511]: time="2025-01-13T21:42:44.152116118Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\""
Jan 13 21:42:44.152258 containerd[1511]: time="2025-01-13T21:42:44.152218274Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully"
Jan 13 21:42:44.152258 containerd[1511]: time="2025-01-13T21:42:44.152250159Z" level=info msg="StopPodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully"
Jan 13 21:42:44.153637 containerd[1511]: time="2025-01-13T21:42:44.152596287Z" level=info msg="RemovePodSandbox for \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\""
Jan 13 21:42:44.153637 containerd[1511]: time="2025-01-13T21:42:44.152629471Z" level=info msg="Forcibly stopping sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\""
Jan 13 21:42:44.153637 containerd[1511]: time="2025-01-13T21:42:44.152722426Z" level=info msg="TearDown network for sandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" successfully"
Jan 13 21:42:44.155417 containerd[1511]: time="2025-01-13T21:42:44.155386688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.155571 containerd[1511]: time="2025-01-13T21:42:44.155545064Z" level=info msg="RemovePodSandbox \"66a554768e942def5d3b9a2299a2e36f7e629fdd9186319db80035b82abf1986\" returns successfully"
Jan 13 21:42:44.155979 containerd[1511]: time="2025-01-13T21:42:44.155950512Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\""
Jan 13 21:42:44.156174 containerd[1511]: time="2025-01-13T21:42:44.156150127Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully"
Jan 13 21:42:44.156332 containerd[1511]: time="2025-01-13T21:42:44.156299652Z" level=info msg="StopPodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully"
Jan 13 21:42:44.156799 containerd[1511]: time="2025-01-13T21:42:44.156761552Z" level=info msg="RemovePodSandbox for \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\""
Jan 13 21:42:44.156871 containerd[1511]: time="2025-01-13T21:42:44.156803778Z" level=info msg="Forcibly stopping sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\""
Jan 13 21:42:44.156921 containerd[1511]: time="2025-01-13T21:42:44.156885419Z" level=info msg="TearDown network for sandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" successfully"
Jan 13 21:42:44.159297 containerd[1511]: time="2025-01-13T21:42:44.159249557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.159387 containerd[1511]: time="2025-01-13T21:42:44.159298251Z" level=info msg="RemovePodSandbox \"29f5da01e14f466cb8893899b26bac1585b40de3152322d1c1ef28a9144ec193\" returns successfully"
Jan 13 21:42:44.159995 containerd[1511]: time="2025-01-13T21:42:44.159728681Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\""
Jan 13 21:42:44.159995 containerd[1511]: time="2025-01-13T21:42:44.159858788Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully"
Jan 13 21:42:44.159995 containerd[1511]: time="2025-01-13T21:42:44.159877071Z" level=info msg="StopPodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully"
Jan 13 21:42:44.160190 containerd[1511]: time="2025-01-13T21:42:44.160153877Z" level=info msg="RemovePodSandbox for \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\""
Jan 13 21:42:44.160190 containerd[1511]: time="2025-01-13T21:42:44.160187334Z" level=info msg="Forcibly stopping sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\""
Jan 13 21:42:44.160360 containerd[1511]: time="2025-01-13T21:42:44.160286806Z" level=info msg="TearDown network for sandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" successfully"
Jan 13 21:42:44.162700 containerd[1511]: time="2025-01-13T21:42:44.162650482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.162767 containerd[1511]: time="2025-01-13T21:42:44.162698913Z" level=info msg="RemovePodSandbox \"70333607835e9fb7f8bbc377041e81da8e2870f499914e12f72d8f673d300b9b\" returns successfully"
Jan 13 21:42:44.163412 containerd[1511]: time="2025-01-13T21:42:44.163384322Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\""
Jan 13 21:42:44.163954 containerd[1511]: time="2025-01-13T21:42:44.163669384Z" level=info msg="TearDown network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" successfully"
Jan 13 21:42:44.163954 containerd[1511]: time="2025-01-13T21:42:44.163693915Z" level=info msg="StopPodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" returns successfully"
Jan 13 21:42:44.164918 containerd[1511]: time="2025-01-13T21:42:44.164206459Z" level=info msg="RemovePodSandbox for \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\""
Jan 13 21:42:44.164918 containerd[1511]: time="2025-01-13T21:42:44.164251156Z" level=info msg="Forcibly stopping sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\""
Jan 13 21:42:44.164918 containerd[1511]: time="2025-01-13T21:42:44.164377720Z" level=info msg="TearDown network for sandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" successfully"
Jan 13 21:42:44.166926 containerd[1511]: time="2025-01-13T21:42:44.166873381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.167025 containerd[1511]: time="2025-01-13T21:42:44.166938340Z" level=info msg="RemovePodSandbox \"9fd9d48b5e2de082d89880034db8fd3504348841a50a298368a9e880a40f1ced\" returns successfully"
Jan 13 21:42:44.167412 containerd[1511]: time="2025-01-13T21:42:44.167379225Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\""
Jan 13 21:42:44.167702 containerd[1511]: time="2025-01-13T21:42:44.167578098Z" level=info msg="TearDown network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" successfully"
Jan 13 21:42:44.167702 containerd[1511]: time="2025-01-13T21:42:44.167602812Z" level=info msg="StopPodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" returns successfully"
Jan 13 21:42:44.168156 containerd[1511]: time="2025-01-13T21:42:44.168118038Z" level=info msg="RemovePodSandbox for \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\""
Jan 13 21:42:44.168220 containerd[1511]: time="2025-01-13T21:42:44.168155413Z" level=info msg="Forcibly stopping sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\""
Jan 13 21:42:44.168300 containerd[1511]: time="2025-01-13T21:42:44.168261110Z" level=info msg="TearDown network for sandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" successfully"
Jan 13 21:42:44.170581 containerd[1511]: time="2025-01-13T21:42:44.170522347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.170581 containerd[1511]: time="2025-01-13T21:42:44.170578870Z" level=info msg="RemovePodSandbox \"9fbbbd1ceb7e34ab1960c60cc93427ad3e738f9eb36a2d2244652dae53cf641f\" returns successfully"
Jan 13 21:42:44.172114 containerd[1511]: time="2025-01-13T21:42:44.171386998Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\""
Jan 13 21:42:44.172114 containerd[1511]: time="2025-01-13T21:42:44.171539599Z" level=info msg="TearDown network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" successfully"
Jan 13 21:42:44.172114 containerd[1511]: time="2025-01-13T21:42:44.171560310Z" level=info msg="StopPodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" returns successfully"
Jan 13 21:42:44.172114 containerd[1511]: time="2025-01-13T21:42:44.172011377Z" level=info msg="RemovePodSandbox for \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\""
Jan 13 21:42:44.172114 containerd[1511]: time="2025-01-13T21:42:44.172042315Z" level=info msg="Forcibly stopping sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\""
Jan 13 21:42:44.172513 containerd[1511]: time="2025-01-13T21:42:44.172269801Z" level=info msg="TearDown network for sandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" successfully"
Jan 13 21:42:44.175349 containerd[1511]: time="2025-01-13T21:42:44.174772029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.175349 containerd[1511]: time="2025-01-13T21:42:44.174844828Z" level=info msg="RemovePodSandbox \"ac2eb7516b38c9b155777ff57edcc6a572db57861ed18ccb1f36ab40d4d9b468\" returns successfully"
Jan 13 21:42:44.176933 containerd[1511]: time="2025-01-13T21:42:44.176902243Z" level=info msg="StopPodSandbox for \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\""
Jan 13 21:42:44.177082 containerd[1511]: time="2025-01-13T21:42:44.177058724Z" level=info msg="TearDown network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" successfully"
Jan 13 21:42:44.177150 containerd[1511]: time="2025-01-13T21:42:44.177092170Z" level=info msg="StopPodSandbox for \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" returns successfully"
Jan 13 21:42:44.177623 containerd[1511]: time="2025-01-13T21:42:44.177597772Z" level=info msg="RemovePodSandbox for \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\""
Jan 13 21:42:44.177698 containerd[1511]: time="2025-01-13T21:42:44.177634403Z" level=info msg="Forcibly stopping sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\""
Jan 13 21:42:44.177781 containerd[1511]: time="2025-01-13T21:42:44.177731008Z" level=info msg="TearDown network for sandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" successfully"
Jan 13 21:42:44.181410 containerd[1511]: time="2025-01-13T21:42:44.181367362Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.181789 containerd[1511]: time="2025-01-13T21:42:44.181414503Z" level=info msg="RemovePodSandbox \"6895ad40ceb9e5dd2a9ae05c0e823628b6ed3fbca755a2b7773177fef304bcf0\" returns successfully"
Jan 13 21:42:44.182393 containerd[1511]: time="2025-01-13T21:42:44.182021958Z" level=info msg="StopPodSandbox for \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\""
Jan 13 21:42:44.190205 containerd[1511]: time="2025-01-13T21:42:44.190096638Z" level=info msg="TearDown network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\" successfully"
Jan 13 21:42:44.190205 containerd[1511]: time="2025-01-13T21:42:44.190159005Z" level=info msg="StopPodSandbox for \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\" returns successfully"
Jan 13 21:42:44.191312 containerd[1511]: time="2025-01-13T21:42:44.190590400Z" level=info msg="RemovePodSandbox for \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\""
Jan 13 21:42:44.191312 containerd[1511]: time="2025-01-13T21:42:44.190621552Z" level=info msg="Forcibly stopping sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\""
Jan 13 21:42:44.191312 containerd[1511]: time="2025-01-13T21:42:44.190717648Z" level=info msg="TearDown network for sandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\" successfully"
Jan 13 21:42:44.193228 containerd[1511]: time="2025-01-13T21:42:44.193189190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.193319 containerd[1511]: time="2025-01-13T21:42:44.193278398Z" level=info msg="RemovePodSandbox \"57479c986d1e75f1f28a9ba9b6bd83082f21b2c9493ecfcca9fea263f8eb3ebf\" returns successfully"
Jan 13 21:42:44.193762 containerd[1511]: time="2025-01-13T21:42:44.193733379Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\""
Jan 13 21:42:44.194661 containerd[1511]: time="2025-01-13T21:42:44.194145407Z" level=info msg="TearDown network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" successfully"
Jan 13 21:42:44.194661 containerd[1511]: time="2025-01-13T21:42:44.194177478Z" level=info msg="StopPodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" returns successfully"
Jan 13 21:42:44.194661 containerd[1511]: time="2025-01-13T21:42:44.194483742Z" level=info msg="RemovePodSandbox for \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\""
Jan 13 21:42:44.194661 containerd[1511]: time="2025-01-13T21:42:44.194511024Z" level=info msg="Forcibly stopping sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\""
Jan 13 21:42:44.195572 containerd[1511]: time="2025-01-13T21:42:44.195129070Z" level=info msg="TearDown network for sandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" successfully"
Jan 13 21:42:44.197657 containerd[1511]: time="2025-01-13T21:42:44.197627129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.197785 containerd[1511]: time="2025-01-13T21:42:44.197761012Z" level=info msg="RemovePodSandbox \"d63aa4c02dd60c31a2f74a192620f7802faa4af831d66cd1fbaf54d2c46ff17a\" returns successfully"
Jan 13 21:42:44.198534 containerd[1511]: time="2025-01-13T21:42:44.198404459Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\""
Jan 13 21:42:44.198659 containerd[1511]: time="2025-01-13T21:42:44.198633749Z" level=info msg="TearDown network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" successfully"
Jan 13 21:42:44.198872 containerd[1511]: time="2025-01-13T21:42:44.198659740Z" level=info msg="StopPodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" returns successfully"
Jan 13 21:42:44.199457 containerd[1511]: time="2025-01-13T21:42:44.199305441Z" level=info msg="RemovePodSandbox for \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\""
Jan 13 21:42:44.199457 containerd[1511]: time="2025-01-13T21:42:44.199392316Z" level=info msg="Forcibly stopping sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\""
Jan 13 21:42:44.199580 containerd[1511]: time="2025-01-13T21:42:44.199535900Z" level=info msg="TearDown network for sandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" successfully"
Jan 13 21:42:44.202175 containerd[1511]: time="2025-01-13T21:42:44.202053938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.202175 containerd[1511]: time="2025-01-13T21:42:44.202105003Z" level=info msg="RemovePodSandbox \"c77ca904b89b4d6842d1d8269436e5eb4f2a700a7c168aea9a0bed60a52d8a00\" returns successfully"
Jan 13 21:42:44.202807 containerd[1511]: time="2025-01-13T21:42:44.202463398Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\""
Jan 13 21:42:44.202807 containerd[1511]: time="2025-01-13T21:42:44.202572959Z" level=info msg="TearDown network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" successfully"
Jan 13 21:42:44.202807 containerd[1511]: time="2025-01-13T21:42:44.202599987Z" level=info msg="StopPodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" returns successfully"
Jan 13 21:42:44.203145 containerd[1511]: time="2025-01-13T21:42:44.203102676Z" level=info msg="RemovePodSandbox for \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\""
Jan 13 21:42:44.203145 containerd[1511]: time="2025-01-13T21:42:44.203140055Z" level=info msg="Forcibly stopping sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\""
Jan 13 21:42:44.203270 containerd[1511]: time="2025-01-13T21:42:44.203224552Z" level=info msg="TearDown network for sandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" successfully"
Jan 13 21:42:44.205904 containerd[1511]: time="2025-01-13T21:42:44.205834981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.205904 containerd[1511]: time="2025-01-13T21:42:44.205888779Z" level=info msg="RemovePodSandbox \"68ba9a1f403afd4e0585bcd4cf92eb44e93169976476736e2725e10a45bdd63a\" returns successfully"
Jan 13 21:42:44.206542 containerd[1511]: time="2025-01-13T21:42:44.206350735Z" level=info msg="StopPodSandbox for \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\""
Jan 13 21:42:44.206542 containerd[1511]: time="2025-01-13T21:42:44.206457621Z" level=info msg="TearDown network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" successfully"
Jan 13 21:42:44.206542 containerd[1511]: time="2025-01-13T21:42:44.206474452Z" level=info msg="StopPodSandbox for \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" returns successfully"
Jan 13 21:42:44.207099 containerd[1511]: time="2025-01-13T21:42:44.207020284Z" level=info msg="RemovePodSandbox for \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\""
Jan 13 21:42:44.207099 containerd[1511]: time="2025-01-13T21:42:44.207055364Z" level=info msg="Forcibly stopping sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\""
Jan 13 21:42:44.207357 containerd[1511]: time="2025-01-13T21:42:44.207167865Z" level=info msg="TearDown network for sandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" successfully"
Jan 13 21:42:44.209741 containerd[1511]: time="2025-01-13T21:42:44.209684171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.209831 containerd[1511]: time="2025-01-13T21:42:44.209743513Z" level=info msg="RemovePodSandbox \"1979e9b814220c27117a8bc0ba2a399544618603c3e44ef98b0bea0711cfef23\" returns successfully"
Jan 13 21:42:44.210933 containerd[1511]: time="2025-01-13T21:42:44.210341797Z" level=info msg="StopPodSandbox for \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\""
Jan 13 21:42:44.210933 containerd[1511]: time="2025-01-13T21:42:44.210451412Z" level=info msg="TearDown network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\" successfully"
Jan 13 21:42:44.210933 containerd[1511]: time="2025-01-13T21:42:44.210470896Z" level=info msg="StopPodSandbox for \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\" returns successfully"
Jan 13 21:42:44.212357 containerd[1511]: time="2025-01-13T21:42:44.211354156Z" level=info msg="RemovePodSandbox for \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\""
Jan 13 21:42:44.212357 containerd[1511]: time="2025-01-13T21:42:44.211385968Z" level=info msg="Forcibly stopping sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\""
Jan 13 21:42:44.212357 containerd[1511]: time="2025-01-13T21:42:44.211485918Z" level=info msg="TearDown network for sandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\" successfully"
Jan 13 21:42:44.215709 containerd[1511]: time="2025-01-13T21:42:44.215676409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:42:44.215813 containerd[1511]: time="2025-01-13T21:42:44.215726558Z" level=info msg="RemovePodSandbox \"d622229bd85664cd97fddb7ee578cc143c55dc863939fe0a4d937adf6bf3780f\" returns successfully"
Jan 13 21:42:44.389889 systemd-networkd[1437]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 21:42:45.122193 kubelet[1918]: E0113 21:42:45.122125 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:42:46.123047 kubelet[1918]: E0113 21:42:46.122972 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:42:47.124093 kubelet[1918]: E0113 21:42:47.123984 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:42:48.124603 kubelet[1918]: E0113 21:42:48.124508 1918 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"