Jan 13 20:45:38.885330 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:45:38.885350 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:38.885362 kernel: BIOS-provided physical RAM map:
Jan 13 20:45:38.885368 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:45:38.885374 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:45:38.885380 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:45:38.885388 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 20:45:38.885394 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 20:45:38.885400 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 20:45:38.885409 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 20:45:38.885415 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:45:38.885421 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:45:38.885431 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:45:38.885437 kernel: NX (Execute Disable) protection: active
Jan 13 20:45:38.885445 kernel: APIC: Static calls initialized
Jan 13 20:45:38.885457 kernel: SMBIOS 2.8 present.
Jan 13 20:45:38.885464 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 20:45:38.885470 kernel: Hypervisor detected: KVM
Jan 13 20:45:38.885477 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:45:38.885484 kernel: kvm-clock: using sched offset of 3024061905 cycles
Jan 13 20:45:38.885491 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:45:38.885498 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 20:45:38.885506 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:45:38.885513 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:45:38.885520 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 20:45:38.885530 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:45:38.885537 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:45:38.885544 kernel: Using GB pages for direct mapping
Jan 13 20:45:38.885550 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:45:38.885558 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 20:45:38.885565 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:38.885572 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:38.885579 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:38.885588 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 20:45:38.885595 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:38.885602 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:38.885609 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:38.885616 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:38.885623 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 20:45:38.885630 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 20:45:38.885640 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 20:45:38.885649 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 20:45:38.885657 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 20:45:38.885664 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 20:45:38.885671 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 20:45:38.885680 kernel: No NUMA configuration found
Jan 13 20:45:38.885688 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 20:45:38.885695 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 20:45:38.885704 kernel: Zone ranges:
Jan 13 20:45:38.885711 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:45:38.885719 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 20:45:38.885726 kernel: Normal empty
Jan 13 20:45:38.885733 kernel: Movable zone start for each node
Jan 13 20:45:38.885740 kernel: Early memory node ranges
Jan 13 20:45:38.885747 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:45:38.885754 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 20:45:38.885761 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 20:45:38.885770 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:45:38.885779 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:45:38.885787 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 20:45:38.885794 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:45:38.885801 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:45:38.885808 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:45:38.885815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:45:38.885822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:45:38.885829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:45:38.885839 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:45:38.885884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:45:38.885892 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:45:38.885906 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:45:38.885913 kernel: TSC deadline timer available
Jan 13 20:45:38.885920 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:45:38.885927 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:45:38.885934 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:45:38.885944 kernel: kvm-guest: setup PV sched yield
Jan 13 20:45:38.885951 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 20:45:38.885961 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:45:38.885969 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:45:38.885976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:45:38.885983 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:45:38.885991 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:45:38.885998 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:45:38.886005 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:45:38.886012 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:45:38.886020 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:38.886030 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:45:38.886037 kernel: random: crng init done
Jan 13 20:45:38.886045 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:45:38.886052 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:45:38.886059 kernel: Fallback order for Node 0: 0
Jan 13 20:45:38.886066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 20:45:38.886073 kernel: Policy zone: DMA32
Jan 13 20:45:38.886081 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:45:38.886090 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136904K reserved, 0K cma-reserved)
Jan 13 20:45:38.886098 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:45:38.886105 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:45:38.886112 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:45:38.886119 kernel: Dynamic Preempt: voluntary
Jan 13 20:45:38.886126 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:45:38.886134 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:45:38.886141 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:45:38.886149 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:45:38.886158 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:45:38.886165 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:45:38.886173 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:45:38.886182 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:45:38.886189 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:45:38.886196 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:45:38.886204 kernel: Console: colour VGA+ 80x25
Jan 13 20:45:38.886211 kernel: printk: console [ttyS0] enabled
Jan 13 20:45:38.886218 kernel: ACPI: Core revision 20230628
Jan 13 20:45:38.886227 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:45:38.886235 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:45:38.886242 kernel: x2apic enabled
Jan 13 20:45:38.886249 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:45:38.886256 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:45:38.886263 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:45:38.886271 kernel: kvm-guest: setup PV IPIs
Jan 13 20:45:38.886287 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:45:38.886295 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:45:38.886302 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 20:45:38.886310 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:45:38.886317 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:45:38.886327 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:45:38.886334 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:45:38.886342 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:45:38.886350 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:45:38.886359 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:45:38.886367 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:45:38.886376 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:45:38.886384 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:45:38.886392 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:45:38.886400 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:45:38.886408 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:45:38.886415 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:45:38.886423 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:45:38.886433 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:45:38.886440 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:45:38.886448 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:45:38.886455 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:45:38.886463 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:45:38.886470 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:45:38.886478 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:45:38.886486 kernel: landlock: Up and running.
Jan 13 20:45:38.886494 kernel: SELinux: Initializing.
Jan 13 20:45:38.886506 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:45:38.886515 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:45:38.886523 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:45:38.886531 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:45:38.886539 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:45:38.886546 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:45:38.886554 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:45:38.886563 kernel: ... version: 0
Jan 13 20:45:38.886573 kernel: ... bit width: 48
Jan 13 20:45:38.886581 kernel: ... generic registers: 6
Jan 13 20:45:38.886588 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:45:38.886595 kernel: ... max period: 00007fffffffffff
Jan 13 20:45:38.886603 kernel: ... fixed-purpose events: 0
Jan 13 20:45:38.886610 kernel: ... event mask: 000000000000003f
Jan 13 20:45:38.886618 kernel: signal: max sigframe size: 1776
Jan 13 20:45:38.886625 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:45:38.886633 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:45:38.886640 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:45:38.886650 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:45:38.886657 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:45:38.886665 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:45:38.886672 kernel: smpboot: Max logical packages: 1
Jan 13 20:45:38.886680 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 20:45:38.886687 kernel: devtmpfs: initialized
Jan 13 20:45:38.886694 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:45:38.886702 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:45:38.886709 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:45:38.886719 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:45:38.886727 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:45:38.886734 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:45:38.886742 kernel: audit: type=2000 audit(1736801138.213:1): state=initialized audit_enabled=0 res=1
Jan 13 20:45:38.886749 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:45:38.886757 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:45:38.886764 kernel: cpuidle: using governor menu
Jan 13 20:45:38.886772 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:45:38.886779 kernel: dca service started, version 1.12.1
Jan 13 20:45:38.886789 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 20:45:38.886797 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 20:45:38.886804 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:45:38.886812 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:45:38.886819 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:45:38.886827 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:45:38.886835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:45:38.886842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:45:38.886861 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:45:38.886871 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:45:38.886878 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:45:38.886886 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:45:38.886893 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:45:38.886907 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:45:38.886914 kernel: ACPI: Interpreter enabled
Jan 13 20:45:38.886922 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:45:38.886929 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:45:38.886937 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:45:38.886947 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:45:38.886954 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:45:38.886962 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:45:38.887166 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:45:38.887304 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:45:38.887433 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:45:38.887444 kernel: PCI host bridge to bus 0000:00
Jan 13 20:45:38.887588 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:45:38.887708 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:45:38.887825 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:38.887965 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 20:45:38.888080 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:45:38.888194 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 20:45:38.888309 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:45:38.888611 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:45:38.888760 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:45:38.888910 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 20:45:38.889040 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 20:45:38.889166 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 20:45:38.889291 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:45:38.889436 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:45:38.889564 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 20:45:38.889690 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 20:45:38.889816 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 20:45:38.889983 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:45:38.890136 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:45:38.890335 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 20:45:38.890471 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 20:45:38.890673 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:45:38.890804 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 20:45:38.890958 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 20:45:38.891085 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 20:45:38.891210 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 20:45:38.891352 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:45:38.891486 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:45:38.891625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:45:38.891751 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 20:45:38.891891 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 20:45:38.892052 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:45:38.892233 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 20:45:38.892245 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:45:38.892258 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:45:38.892266 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:45:38.892274 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:45:38.892281 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:45:38.892289 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:45:38.892302 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:45:38.892310 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:45:38.892317 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:45:38.892325 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:45:38.892336 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:45:38.892343 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:45:38.892351 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:45:38.892358 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:45:38.892366 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:45:38.892374 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:45:38.892381 kernel: iommu: Default domain type: Translated
Jan 13 20:45:38.892389 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:45:38.892397 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:45:38.892407 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:45:38.892414 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:45:38.892422 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 20:45:38.892551 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:45:38.892676 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:45:38.892801 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:45:38.892811 kernel: vgaarb: loaded
Jan 13 20:45:38.892819 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:45:38.892830 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:45:38.892838 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:45:38.892862 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:45:38.892870 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:45:38.892877 kernel: pnp: PnP ACPI init
Jan 13 20:45:38.893036 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 20:45:38.893048 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:45:38.893058 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:45:38.893073 kernel: NET: Registered PF_INET protocol family
Jan 13 20:45:38.893083 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:45:38.893092 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:45:38.893100 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:45:38.893108 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:45:38.893115 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:45:38.893123 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:45:38.893130 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:45:38.893138 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:45:38.893149 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:45:38.893156 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:45:38.893277 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:45:38.893452 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:45:38.893580 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:38.893695 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 20:45:38.893811 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:45:38.893951 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 20:45:38.893967 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:45:38.893975 kernel: Initialise system trusted keyrings
Jan 13 20:45:38.893983 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:45:38.893990 kernel: Key type asymmetric registered
Jan 13 20:45:38.893998 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:45:38.894006 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:45:38.894013 kernel: io scheduler mq-deadline registered
Jan 13 20:45:38.894021 kernel: io scheduler kyber registered
Jan 13 20:45:38.894028 kernel: io scheduler bfq registered
Jan 13 20:45:38.894039 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:45:38.894047 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:45:38.894055 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:45:38.894062 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:45:38.894070 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:45:38.894078 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:45:38.894086 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:45:38.894094 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:45:38.894102 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:45:38.894112 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:45:38.894253 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:45:38.894375 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:45:38.894494 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:45:38 UTC (1736801138)
Jan 13 20:45:38.894612 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 20:45:38.894622 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:45:38.894630 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:45:38.894637 kernel: Segment Routing with IPv6
Jan 13 20:45:38.894649 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:45:38.894656 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:45:38.894664 kernel: Key type dns_resolver registered
Jan 13 20:45:38.894672 kernel: IPI shorthand broadcast: enabled
Jan 13 20:45:38.894679 kernel: sched_clock: Marking stable (763003675, 137705405)->(924181581, -23472501)
Jan 13 20:45:38.894687 kernel: registered taskstats version 1
Jan 13 20:45:38.894694 kernel: Loading compiled-in X.509 certificates
Jan 13 20:45:38.894702 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:45:38.894710 kernel: Key type .fscrypt registered
Jan 13 20:45:38.894720 kernel: Key type fscrypt-provisioning registered
Jan 13 20:45:38.894728 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:45:38.894735 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:45:38.894743 kernel: ima: No architecture policies found
Jan 13 20:45:38.894750 kernel: clk: Disabling unused clocks
Jan 13 20:45:38.894758 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:45:38.894766 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:45:38.894774 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:45:38.894782 kernel: Run /init as init process
Jan 13 20:45:38.894791 kernel: with arguments:
Jan 13 20:45:38.894799 kernel: /init
Jan 13 20:45:38.894806 kernel: with environment:
Jan 13 20:45:38.894814 kernel: HOME=/
Jan 13 20:45:38.894821 kernel: TERM=linux
Jan 13 20:45:38.894829 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:45:38.894838 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:45:38.894861 systemd[1]: Detected virtualization kvm.
Jan 13 20:45:38.894873 systemd[1]: Detected architecture x86-64.
Jan 13 20:45:38.894881 systemd[1]: Running in initrd.
Jan 13 20:45:38.894889 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:45:38.894919 systemd[1]: Hostname set to .
Jan 13 20:45:38.894928 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:45:38.894936 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:45:38.894944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:38.894953 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:38.894964 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:45:38.894984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:45:38.894995 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:45:38.895004 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:45:38.895014 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:45:38.895025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:45:38.895034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:38.895042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:38.895051 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:45:38.895059 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:45:38.895067 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:45:38.895076 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:45:38.895084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:45:38.895095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:45:38.895103 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:45:38.895112 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:45:38.895120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:38.895129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:38.895137 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:38.895145 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:45:38.895156 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:45:38.895164 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:45:38.895175 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:45:38.895183 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:45:38.895192 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:45:38.895200 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:45:38.895209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:38.895217 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:45:38.895225 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:38.895233 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:45:38.895245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:45:38.895273 systemd-journald[194]: Collecting audit messages is disabled.
Jan 13 20:45:38.895294 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:45:38.895303 systemd-journald[194]: Journal started
Jan 13 20:45:38.895323 systemd-journald[194]: Runtime Journal (/run/log/journal/ea2b7794e175455ea9c5174b0f001e77) is 6.0M, max 48.4M, 42.3M free.
Jan 13 20:45:38.890704 systemd-modules-load[195]: Inserted module 'overlay'
Jan 13 20:45:38.932832 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:45:38.932883 kernel: Bridge firewalling registered
Jan 13 20:45:38.924215 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 13 20:45:38.938007 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:45:38.938504 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:38.941129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:38.958051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:38.961304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:45:38.964041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:45:38.967077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:45:38.976573 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:38.979113 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:38.982011 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:38.984833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:38.989194 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:45:38.992598 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:39.007289 dracut-cmdline[229]: dracut-dracut-053
Jan 13 20:45:39.010842 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:39.026799 systemd-resolved[230]: Positive Trust Anchors:
Jan 13 20:45:39.026821 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:45:39.026879 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:45:39.030065 systemd-resolved[230]: Defaulting to hostname 'linux'.
Jan 13 20:45:39.031323 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:39.035766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:39.116884 kernel: SCSI subsystem initialized
Jan 13 20:45:39.125875 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:45:39.137895 kernel: iscsi: registered transport (tcp)
Jan 13 20:45:39.159208 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:45:39.159304 kernel: QLogic iSCSI HBA Driver
Jan 13 20:45:39.214446 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:45:39.226252 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:45:39.251667 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:45:39.251771 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:45:39.251784 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:45:39.294878 kernel: raid6: avx2x4 gen() 29879 MB/s
Jan 13 20:45:39.311878 kernel: raid6: avx2x2 gen() 31078 MB/s
Jan 13 20:45:39.328981 kernel: raid6: avx2x1 gen() 24959 MB/s
Jan 13 20:45:39.329014 kernel: raid6: using algorithm avx2x2 gen() 31078 MB/s
Jan 13 20:45:39.346977 kernel: raid6: .... xor() 18581 MB/s, rmw enabled
Jan 13 20:45:39.347032 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 20:45:39.366880 kernel: xor: automatically using best checksumming function avx
Jan 13 20:45:39.518898 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:45:39.531196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:45:39.545056 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:39.557147 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 13 20:45:39.561920 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:39.569003 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:45:39.581743 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jan 13 20:45:39.612513 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:45:39.626052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:45:39.690642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:39.696116 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:45:39.710079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:45:39.713584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:45:39.715127 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:39.717819 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:45:39.726627 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:45:39.734897 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 20:45:39.759690 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:45:39.759707 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:45:39.759903 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:45:39.759923 kernel: GPT:9289727 != 19775487
Jan 13 20:45:39.759935 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:45:39.759946 kernel: GPT:9289727 != 19775487
Jan 13 20:45:39.759956 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:45:39.759967 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:45:39.737357 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:45:39.763866 kernel: libata version 3.00 loaded.
Jan 13 20:45:39.763896 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:45:39.763912 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:45:39.768749 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:45:39.768886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:39.777992 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 20:45:39.795749 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 20:45:39.796984 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 20:45:39.797164 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 20:45:39.797309 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (462)
Jan 13 20:45:39.797321 kernel: scsi host0: ahci
Jan 13 20:45:39.797483 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (476)
Jan 13 20:45:39.797495 kernel: scsi host1: ahci
Jan 13 20:45:39.797668 kernel: scsi host2: ahci
Jan 13 20:45:39.797821 kernel: scsi host3: ahci
Jan 13 20:45:39.797997 kernel: scsi host4: ahci
Jan 13 20:45:39.798157 kernel: scsi host5: ahci
Jan 13 20:45:39.798312 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 20:45:39.798324 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 20:45:39.798340 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 20:45:39.798350 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 20:45:39.798361 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 20:45:39.798372 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 20:45:39.773516 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:39.781068 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:45:39.781230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:39.784212 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:39.795199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:39.820969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:45:39.847511 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:45:39.849040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:39.855678 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:45:39.856945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:45:39.865136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:45:39.877021 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:45:39.879010 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:39.888088 disk-uuid[556]: Primary Header is updated.
Jan 13 20:45:39.888088 disk-uuid[556]: Secondary Entries is updated.
Jan 13 20:45:39.888088 disk-uuid[556]: Secondary Header is updated.
Jan 13 20:45:39.890872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:45:39.895874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:45:39.898735 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:40.101885 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 20:45:40.101957 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 20:45:40.102874 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 20:45:40.102893 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 20:45:40.104067 kernel: ata3.00: applying bridge limits
Jan 13 20:45:40.104888 kernel: ata3.00: configured for UDMA/100
Jan 13 20:45:40.105883 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 20:45:40.109875 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 20:45:40.110873 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 20:45:40.110887 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 20:45:40.147428 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 20:45:40.164522 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:45:40.164541 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:45:40.915906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:45:40.918339 disk-uuid[560]: The operation has completed successfully.
Jan 13 20:45:40.970473 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:45:40.970658 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:45:40.995047 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:45:40.999608 sh[592]: Success
Jan 13 20:45:41.015888 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 20:45:41.058701 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:45:41.070572 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:45:41.073921 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:45:41.086779 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 20:45:41.086821 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:41.086861 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:45:41.088880 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:45:41.088909 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:45:41.094847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:45:41.095685 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:45:41.101136 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:45:41.104220 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:45:41.111876 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:41.111907 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:41.113891 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:45:41.115876 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:45:41.126483 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:45:41.128600 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:41.139430 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:45:41.149237 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:45:41.204278 ignition[686]: Ignition 2.20.0
Jan 13 20:45:41.204379 ignition[686]: Stage: fetch-offline
Jan 13 20:45:41.204443 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:41.204461 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:45:41.205464 ignition[686]: parsed url from cmdline: ""
Jan 13 20:45:41.205470 ignition[686]: no config URL provided
Jan 13 20:45:41.205478 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:45:41.205494 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:45:41.205531 ignition[686]: op(1): [started] loading QEMU firmware config module
Jan 13 20:45:41.205538 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:45:41.213129 ignition[686]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:45:41.214270 ignition[686]: parsing config with SHA512: 2b478fc3c45aabe29c4461206f93a311edab44440eab6988272f7ddb396fb1d9d6af8deb5d579314ecd1e4f95888ea3d8487e01db452867dcb040933f4392c3b
Jan 13 20:45:41.217328 unknown[686]: fetched base config from "system"
Jan 13 20:45:41.217339 unknown[686]: fetched user config from "qemu"
Jan 13 20:45:41.217597 ignition[686]: fetch-offline: fetch-offline passed
Jan 13 20:45:41.220005 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:45:41.217664 ignition[686]: Ignition finished successfully
Jan 13 20:45:41.242309 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:45:41.263181 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:45:41.285457 systemd-networkd[781]: lo: Link UP
Jan 13 20:45:41.285469 systemd-networkd[781]: lo: Gained carrier
Jan 13 20:45:41.287260 systemd-networkd[781]: Enumeration completed
Jan 13 20:45:41.287375 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:45:41.287705 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:41.287709 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:45:41.288553 systemd-networkd[781]: eth0: Link UP
Jan 13 20:45:41.288557 systemd-networkd[781]: eth0: Gained carrier
Jan 13 20:45:41.288563 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:41.289517 systemd[1]: Reached target network.target - Network.
Jan 13 20:45:41.291607 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:45:41.300935 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:45:41.301058 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:45:41.327196 ignition[784]: Ignition 2.20.0
Jan 13 20:45:41.327212 ignition[784]: Stage: kargs
Jan 13 20:45:41.327420 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:41.327435 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:45:41.328138 ignition[784]: kargs: kargs passed
Jan 13 20:45:41.328185 ignition[784]: Ignition finished successfully
Jan 13 20:45:41.332943 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:45:41.345026 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:45:41.357294 ignition[793]: Ignition 2.20.0
Jan 13 20:45:41.357306 ignition[793]: Stage: disks
Jan 13 20:45:41.357468 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:41.357480 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:45:41.360344 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:45:41.358369 ignition[793]: disks: disks passed
Jan 13 20:45:41.363183 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:45:41.358415 ignition[793]: Ignition finished successfully
Jan 13 20:45:41.365559 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:45:41.368666 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:45:41.370028 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:45:41.372103 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:45:41.381152 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:45:41.394424 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:45:41.402021 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:45:41.411047 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:45:41.499889 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 20:45:41.500382 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:45:41.501118 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:45:41.514953 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:45:41.517014 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:45:41.518276 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:45:41.518319 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:45:41.518343 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:45:41.525941 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:45:41.527623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:45:41.536879 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811)
Jan 13 20:45:41.539203 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:41.539236 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:41.539254 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:45:41.542883 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:45:41.544987 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:45:41.572841 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:45:41.579701 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:45:41.586218 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:45:41.591452 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:45:41.694510 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:45:41.702168 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:45:41.704271 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:45:41.710867 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:41.733594 ignition[925]: INFO : Ignition 2.20.0
Jan 13 20:45:41.733594 ignition[925]: INFO : Stage: mount
Jan 13 20:45:41.737355 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:41.737355 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:45:41.737355 ignition[925]: INFO : mount: mount passed
Jan 13 20:45:41.737355 ignition[925]: INFO : Ignition finished successfully
Jan 13 20:45:41.735604 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:45:41.737631 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:45:41.746028 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:45:42.086061 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:45:42.095186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:45:42.101887 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Jan 13 20:45:42.101947 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:42.104557 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:42.104583 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:45:42.107874 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:45:42.109043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:45:42.134016 ignition[956]: INFO : Ignition 2.20.0
Jan 13 20:45:42.134016 ignition[956]: INFO : Stage: files
Jan 13 20:45:42.136078 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:42.136078 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:45:42.139090 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:45:42.140728 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:45:42.140728 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:45:42.145022 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:45:42.146642 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:45:42.148635 unknown[956]: wrote ssh authorized keys file for user: core
Jan 13 20:45:42.150075 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:42.151677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 20:45:42.507528 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Jan 13 20:45:42.687040 systemd-networkd[781]: eth0: Gained IPv6LL
Jan 13 20:45:42.919610 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:42.919610 ignition[956]: INFO : files: op(8): [started] processing unit "containerd.service"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(8): [finished] processing unit "containerd.service"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(a): [started] processing unit "coreos-metadata.service"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service"
Jan 13 20:45:42.923375 ignition[956]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:45:42.952503 ignition[956]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:45:42.959762 ignition[956]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:45:42.961445 ignition[956]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:45:42.961445 ignition[956]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:42.961445 ignition[956]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:42.961445 ignition[956]: INFO : files: files passed
Jan 13 20:45:42.961445 ignition[956]: INFO : Ignition finished successfully
Jan 13 20:45:42.963440 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:45:42.973032 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:45:42.975252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:45:42.978824 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:45:42.979001 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:45:42.989943 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:45:42.993740 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:42.993740 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:42.999371 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:42.996807 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:45:42.999597 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:45:43.010076 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:45:43.044810 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:45:43.045057 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:45:43.047643 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:45:43.049719 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:45:43.051809 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:45:43.053091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:45:43.076591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:45:43.090047 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:45:43.101550 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:43.103036 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:43.105489 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:45:43.107889 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:45:43.108034 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:45:43.110419 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:45:43.112140 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:45:43.114073 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:45:43.116133 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:45:43.118188 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:45:43.120385 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:45:43.122520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:45:43.124948 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:45:43.127186 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:45:43.129348 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:45:43.131178 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:45:43.131334 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:45:43.133669 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:45:43.135176 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:45:43.137350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:45:43.137555 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:45:43.139638 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:45:43.139836 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:45:43.142168 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 13 20:45:43.142345 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:45:43.144193 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:45:43.145896 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:45:43.149923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:45:43.151675 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:45:43.153676 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:45:43.155502 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:45:43.155651 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:45:43.157566 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:45:43.157696 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:45:43.160090 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:45:43.160269 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:45:43.162169 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:45:43.162328 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:45:43.179169 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:45:43.181177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:45:43.181332 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:45:43.184671 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:45:43.186596 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:45:43.186820 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:45:43.189278 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 13 20:45:43.189623 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:45:43.194941 ignition[1011]: INFO : Ignition 2.20.0 Jan 13 20:45:43.194941 ignition[1011]: INFO : Stage: umount Jan 13 20:45:43.196748 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:45:43.196748 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:45:43.196748 ignition[1011]: INFO : umount: umount passed Jan 13 20:45:43.196748 ignition[1011]: INFO : Ignition finished successfully Jan 13 20:45:43.197591 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:45:43.197754 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:45:43.200539 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:45:43.200697 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:45:43.205576 systemd[1]: Stopped target network.target - Network. Jan 13 20:45:43.206069 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:45:43.206147 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:45:43.206448 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:45:43.206514 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:45:43.206788 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:45:43.206875 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:45:43.207284 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:45:43.207354 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:45:43.207819 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:45:43.216659 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 13 20:45:43.219973 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 13 20:45:43.222487 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:45:43.222644 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:45:43.224400 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:45:43.224580 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:45:43.229651 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:45:43.229723 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:45:43.237013 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:45:43.239308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:45:43.239380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:45:43.241794 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:45:43.241871 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:45:43.244408 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:45:43.244467 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:45:43.246646 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:45:43.246713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:45:43.249266 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:45:43.252747 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:45:43.263099 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:45:43.264269 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:45:43.274832 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 13 20:45:43.276009 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:45:43.278981 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:45:43.279042 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:45:43.282112 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:45:43.282161 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:45:43.285133 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:45:43.285191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:45:43.288359 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:45:43.288418 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:45:43.291405 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:45:43.291463 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:45:43.307019 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:45:43.308209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:45:43.309403 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:45:43.311826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:45:43.313535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:43.317103 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:45:43.318260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:45:43.439131 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:45:43.439276 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 13 20:45:43.441425 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:45:43.442245 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:45:43.442352 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:45:43.457098 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:45:43.465250 systemd[1]: Switching root. Jan 13 20:45:43.499835 systemd-journald[194]: Journal stopped Jan 13 20:45:44.964171 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 13 20:45:44.964235 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:45:44.964258 kernel: SELinux: policy capability open_perms=1 Jan 13 20:45:44.964269 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:45:44.964285 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:45:44.964302 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:45:44.964313 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:45:44.964325 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:45:44.964337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:45:44.964348 kernel: audit: type=1403 audit(1736801144.189:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:45:44.964361 systemd[1]: Successfully loaded SELinux policy in 50.044ms. Jan 13 20:45:44.964381 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.725ms. Jan 13 20:45:44.964395 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:45:44.964412 systemd[1]: Detected virtualization kvm. Jan 13 20:45:44.964425 systemd[1]: Detected architecture x86-64. 
Jan 13 20:45:44.964438 systemd[1]: Detected first boot. Jan 13 20:45:44.964450 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:45:44.964462 zram_generator::config[1072]: No configuration found. Jan 13 20:45:44.964479 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:45:44.964491 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:45:44.964503 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:45:44.964516 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:45:44.964528 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:45:44.964541 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:45:44.964553 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:45:44.964565 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:45:44.964578 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:45:44.964593 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:45:44.964606 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:45:44.964622 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:45:44.964635 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:45:44.964647 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:45:44.964660 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:45:44.964672 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 13 20:45:44.964685 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:45:44.964697 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:45:44.964713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:45:44.964734 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:45:44.964746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:45:44.964758 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:45:44.964771 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:45:44.964783 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:45:44.964795 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:45:44.964810 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:45:44.964823 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:45:44.964835 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:45:44.964858 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:45:44.964871 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:45:44.964884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:45:44.964896 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:45:44.964909 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:45:44.964921 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:45:44.964933 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:45:44.964949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 13 20:45:44.964962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:45:44.964979 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:45:44.964991 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:45:44.965004 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:45:44.965017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:45:44.965029 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:45:44.965041 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:45:44.965057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:45:44.965069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:45:44.965081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:45:44.965094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:45:44.965106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:45:44.965118 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:45:44.965131 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 20:45:44.965144 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 20:45:44.965159 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:45:44.965171 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 13 20:45:44.965182 kernel: loop: module loaded Jan 13 20:45:44.965214 systemd-journald[1148]: Collecting audit messages is disabled. Jan 13 20:45:44.965236 systemd-journald[1148]: Journal started Jan 13 20:45:44.965257 systemd-journald[1148]: Runtime Journal (/run/log/journal/ea2b7794e175455ea9c5174b0f001e77) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:45:44.973880 kernel: fuse: init (API version 7.39) Jan 13 20:45:44.977869 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:45:44.981946 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:45:44.989865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:45:44.992866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:45:45.014318 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:45:45.016386 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:45:45.017797 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:45:45.019157 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:45:45.020537 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:45:45.022404 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:45:45.023988 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:45:45.025937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:45:45.027967 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:45:45.028304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:45:45.030131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 13 20:45:45.030412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:45:45.031951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:45:45.032177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:45:45.033885 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:45:45.034131 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:45:45.035651 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:45:45.035931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:45:45.037522 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:45:45.039364 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:45:45.041098 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:45:45.050056 kernel: ACPI: bus type drm_connector registered Jan 13 20:45:45.050330 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:45:45.050735 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:45:45.052512 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:45:45.061675 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:45:45.071011 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:45:45.074043 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:45:45.075232 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:45:45.078012 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 13 20:45:45.083461 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:45:45.086430 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:45:45.092251 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:45:45.093660 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:45:45.095679 systemd-journald[1148]: Time spent on flushing to /var/log/journal/ea2b7794e175455ea9c5174b0f001e77 is 13.250ms for 921 entries. Jan 13 20:45:45.095679 systemd-journald[1148]: System Journal (/var/log/journal/ea2b7794e175455ea9c5174b0f001e77) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:45:45.120750 systemd-journald[1148]: Received client request to flush runtime journal. Jan 13 20:45:45.103055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:45:45.106063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:45:45.118352 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:45:45.120457 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:45:45.129251 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:45:45.131629 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:45:45.133598 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:45:45.137096 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:45:45.142411 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 13 20:45:45.151209 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:45:45.158558 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jan 13 20:45:45.158584 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jan 13 20:45:45.164026 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:45:45.168126 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:45:45.188207 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:45:45.222320 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:45:45.234015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:45:45.256882 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Jan 13 20:45:45.256914 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Jan 13 20:45:45.264413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:45:45.777272 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:45:45.789347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:45:45.818325 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Jan 13 20:45:45.838803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:45:45.849084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:45:45.871022 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:45:45.877876 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1243) Jan 13 20:45:45.895451 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jan 13 20:45:45.919776 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:45:45.946309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:45:45.959022 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:45:45.966994 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:45:45.968918 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:45:45.973873 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:45:45.999924 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:45:46.000122 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:45:45.987349 systemd-networkd[1246]: lo: Link UP Jan 13 20:45:45.987355 systemd-networkd[1246]: lo: Gained carrier Jan 13 20:45:45.996479 systemd-networkd[1246]: Enumeration completed Jan 13 20:45:45.996739 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:45:46.001391 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:45:46.001593 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:45:46.004145 systemd-networkd[1246]: eth0: Link UP Jan 13 20:45:46.004157 systemd-networkd[1246]: eth0: Gained carrier Jan 13 20:45:46.004175 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:45:46.006090 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 13 20:45:46.014866 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:45:46.071939 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:45:46.075090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:45:46.084151 kernel: kvm_amd: TSC scaling supported Jan 13 20:45:46.084197 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:45:46.084210 kernel: kvm_amd: Nested Paging enabled Jan 13 20:45:46.084223 kernel: kvm_amd: LBR virtualization supported Jan 13 20:45:46.085225 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:45:46.085244 kernel: kvm_amd: Virtual GIF supported Jan 13 20:45:46.105915 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:45:46.150223 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:45:46.173031 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:45:46.174891 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:46.183237 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:45:46.216019 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:45:46.217561 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:45:46.234974 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:45:46.240514 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:45:46.284022 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:45:46.285539 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 13 20:45:46.286902 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:45:46.286934 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:45:46.288031 systemd[1]: Reached target machines.target - Containers. Jan 13 20:45:46.290284 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:45:46.302025 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:45:46.304798 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:45:46.306170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:45:46.307362 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:45:46.311827 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:45:46.316014 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:45:46.318741 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:45:46.334531 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:45:46.342465 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:45:46.343277 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 20:45:46.348868 kernel: loop0: detected capacity change from 0 to 140992 Jan 13 20:45:46.363876 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:45:46.394983 kernel: loop1: detected capacity change from 0 to 211296 Jan 13 20:45:46.425228 kernel: loop2: detected capacity change from 0 to 138184 Jan 13 20:45:46.458877 kernel: loop3: detected capacity change from 0 to 140992 Jan 13 20:45:46.471871 kernel: loop4: detected capacity change from 0 to 211296 Jan 13 20:45:46.477881 kernel: loop5: detected capacity change from 0 to 138184 Jan 13 20:45:46.486793 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:45:46.487513 (sd-merge)[1307]: Merged extensions into '/usr'. Jan 13 20:45:46.492003 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:45:46.492021 systemd[1]: Reloading... Jan 13 20:45:46.555015 zram_generator::config[1335]: No configuration found. Jan 13 20:45:46.555133 ldconfig[1291]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:45:46.685098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:45:46.749505 systemd[1]: Reloading finished in 256 ms. Jan 13 20:45:46.769070 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:45:46.770691 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:45:46.787008 systemd[1]: Starting ensure-sysext.service... Jan 13 20:45:46.789281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:45:46.794068 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... 
Jan 13 20:45:46.794085 systemd[1]: Reloading...
Jan 13 20:45:46.813234 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:45:46.813605 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:45:46.814619 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:45:46.815002 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jan 13 20:45:46.815083 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jan 13 20:45:46.818457 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:45:46.818471 systemd-tmpfiles[1380]: Skipping /boot
Jan 13 20:45:46.832612 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:45:46.832751 systemd-tmpfiles[1380]: Skipping /boot
Jan 13 20:45:46.845890 zram_generator::config[1411]: No configuration found.
Jan 13 20:45:46.962692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:47.028138 systemd[1]: Reloading finished in 233 ms.
Jan 13 20:45:47.039147 systemd-networkd[1246]: eth0: Gained IPv6LL
Jan 13 20:45:47.048618 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:45:47.062090 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:47.071158 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:45:47.074196 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:45:47.077122 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:45:47.081182 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:47.085833 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:45:47.092599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:47.093314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:47.098605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:47.101683 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:45:47.112701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:47.114015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:47.114143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:47.116515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:47.116838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:47.119649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:45:47.119926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:45:47.122102 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:47.122391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:47.126080 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:45:47.138114 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:45:47.143311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:47.143543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:47.152138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:47.155520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:45:47.160095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:45:47.166193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:47.167636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:47.172003 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:45:47.173879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:47.177286 augenrules[1499]: No rules
Jan 13 20:45:47.177160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:47.178226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:47.180779 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:45:47.181186 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:45:47.184893 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:45:47.185201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:45:47.189188 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:45:47.191349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:45:47.191666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:45:47.193557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:47.194449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:47.195810 systemd-resolved[1458]: Positive Trust Anchors:
Jan 13 20:45:47.195827 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:45:47.195870 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:45:47.197544 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:45:47.202048 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:45:47.204110 systemd-resolved[1458]: Defaulting to hostname 'linux'.
Jan 13 20:45:47.206948 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:47.212351 systemd[1]: Reached target network.target - Network.
Jan 13 20:45:47.213350 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:45:47.214450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:47.215768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:45:47.215839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
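The positive trust anchor resolved logs above is the DNSSEC root-zone DS record in standard presentation format: key tag 20326 (the KSK-2017 root key), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the 64-hex-digit digest. A minimal sketch of parsing that format (the `parse_ds` helper is illustrative, not a resolved API):

```python
def parse_ds(record: str) -> dict:
    """Parse a DS record in presentation format: NAME IN DS key_tag algorithm digest_type digest."""
    name, _cls, _type, key_tag, algorithm, digest_type, digest = record.split()
    return {
        "name": name,                     # "." is the root zone
        "key_tag": int(key_tag),          # identifies which DNSKEY this DS covers
        "algorithm": int(algorithm),      # 8 = RSA/SHA-256
        "digest_type": int(digest_type),  # 2 = SHA-256, so the digest is 64 hex chars
        "digest": digest.lower(),
    }

anchor = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
print(parse_ds(anchor))
```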
Jan 13 20:45:47.225003 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:45:47.226148 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:45:47.288834 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:45:47.797734 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:45:47.797747 systemd-resolved[1458]: Clock change detected. Flushing caches.
Jan 13 20:45:47.797773 systemd-timesyncd[1522]: Initial clock synchronization to Mon 2025-01-13 20:45:47.797640 UTC.
Jan 13 20:45:47.798712 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:45:47.799881 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:45:47.801197 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:45:47.802491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:45:47.803773 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:45:47.803797 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:45:47.804752 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:45:47.806131 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:45:47.807432 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:45:47.808757 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:45:47.810252 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
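The jump in journal timestamps between "Started systemd-timesyncd" (20:45:47.288834) and the next entry (20:45:47.797734) is consistent with timesyncd stepping the wall clock forward by about half a second on first synchronization, which is also why resolved logs "Clock change detected. Flushing caches." A sketch computing that apparent step from the two journal prefixes (the `journal_time` helper is illustrative; the year is absent from the prefix and assumed to be 2025):

```python
from datetime import datetime

def journal_time(prefix: str) -> datetime:
    # "Jan 13 20:45:47.288834" -> datetime; 2025 is an assumption, not in the prefix.
    return datetime.strptime("2025 " + prefix, "%Y %b %d %H:%M:%S.%f")

before = journal_time("Jan 13 20:45:47.288834")  # last entry before the step
after = journal_time("Jan 13 20:45:47.797734")   # first entry after the step
step = (after - before).total_seconds()
print(f"apparent clock step: {step:.6f}s")
```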
Jan 13 20:45:47.813539 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:45:47.816068 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:45:47.822523 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:45:47.823693 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:45:47.824733 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:45:47.825949 systemd[1]: System is tainted: cgroupsv1
Jan 13 20:45:47.825987 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:45:47.826023 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:45:47.827678 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:45:47.830345 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:45:47.832911 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:45:47.836105 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:45:47.839627 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:45:47.843356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:45:47.846674 jq[1529]: false
Jan 13 20:45:47.847052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:47.851718 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:45:47.855243 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:45:47.863502 dbus-daemon[1528]: [system] SELinux support is enabled
Jan 13 20:45:47.864168 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:45:47.868180 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found loop3
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found loop4
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found loop5
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found sr0
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda1
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda2
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda3
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found usr
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda4
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda6
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda7
Jan 13 20:45:47.872723 extend-filesystems[1531]: Found vda9
Jan 13 20:45:47.872723 extend-filesystems[1531]: Checking size of /dev/vda9
Jan 13 20:45:47.880185 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:45:47.880944 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:45:47.887294 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:45:47.889385 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:45:47.890322 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:45:47.898983 extend-filesystems[1531]: Resized partition /dev/vda9
Jan 13 20:45:47.903136 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:45:47.903476 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:45:47.904466 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:45:47.904788 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:45:47.922723 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1256)
Jan 13 20:45:47.919094 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:45:47.921052 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:45:47.921399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:45:47.926452 extend-filesystems[1561]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:45:47.940322 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:45:47.940349 update_engine[1553]: I20250113 20:45:47.926327 1553 main.cc:92] Flatcar Update Engine starting
Jan 13 20:45:47.940349 update_engine[1553]: I20250113 20:45:47.927513 1553 update_check_scheduler.cc:74] Next update check in 8m13s
Jan 13 20:45:47.940682 jq[1557]: true
Jan 13 20:45:47.947166 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:45:47.957321 jq[1571]: true
Jan 13 20:45:47.964843 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:45:47.971621 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:45:47.971972 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:45:47.982581 extend-filesystems[1561]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:45:47.982581 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:45:47.982581 extend-filesystems[1561]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:45:47.987107 extend-filesystems[1531]: Resized filesystem in /dev/vda9
Jan 13 20:45:47.988927 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:45:47.989418 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
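The online resize above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. The arithmetic, as a sketch:

```python
BLOCK_SIZE = 4096  # ext4 block size, reported as "(4k)" in the resize2fs message

old_blocks, new_blocks = 553472, 1864699   # from the EXT4-fs resize lines
old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```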
Jan 13 20:45:47.989804 systemd-logind[1549]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:45:47.989836 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:45:47.993821 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:45:47.996605 systemd-logind[1549]: New seat seat0.
Jan 13 20:45:48.000355 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:45:48.002280 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:45:48.002659 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:45:48.002690 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:45:48.004328 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:45:48.004354 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:45:48.006923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:45:48.015101 bash[1605]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:45:48.017043 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:45:48.020544 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:45:48.025944 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:45:48.047204 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:45:48.151078 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:45:48.154452 containerd[1572]: time="2025-01-13T20:45:48.154338295Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:45:48.179096 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:45:48.179310 containerd[1572]: time="2025-01-13T20:45:48.179066119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182198 containerd[1572]: time="2025-01-13T20:45:48.181758818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182198 containerd[1572]: time="2025-01-13T20:45:48.181792251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:45:48.182198 containerd[1572]: time="2025-01-13T20:45:48.181817859Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:45:48.182198 containerd[1572]: time="2025-01-13T20:45:48.182023074Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:45:48.182198 containerd[1572]: time="2025-01-13T20:45:48.182040386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182198 containerd[1572]: time="2025-01-13T20:45:48.182111740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182198 containerd[1572]: time="2025-01-13T20:45:48.182124354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182412 containerd[1572]: time="2025-01-13T20:45:48.182394951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182412 containerd[1572]: time="2025-01-13T20:45:48.182408847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182473 containerd[1572]: time="2025-01-13T20:45:48.182423244Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:48.182473 containerd[1572]: time="2025-01-13T20:45:48.182433323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:48.183169 containerd[1572]: time="2025-01-13T20:45:48.182532269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:48.183169 containerd[1572]: time="2025-01-13T20:45:48.182790684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:45:48.183169 containerd[1572]: time="2025-01-13T20:45:48.182967986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:45:48.183169 containerd[1572]: time="2025-01-13T20:45:48.182981521Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:45:48.183169 containerd[1572]: time="2025-01-13T20:45:48.183095655Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:45:48.183169 containerd[1572]: time="2025-01-13T20:45:48.183157732Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:45:48.192317 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:45:48.197096 containerd[1572]: time="2025-01-13T20:45:48.197006283Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:45:48.197174 containerd[1572]: time="2025-01-13T20:45:48.197117521Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:45:48.197174 containerd[1572]: time="2025-01-13T20:45:48.197141516Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:45:48.197174 containerd[1572]: time="2025-01-13T20:45:48.197162887Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:45:48.197277 containerd[1572]: time="2025-01-13T20:45:48.197182894Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:45:48.197424 containerd[1572]: time="2025-01-13T20:45:48.197402566Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:45:48.197848 containerd[1572]: time="2025-01-13T20:45:48.197821562Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:45:48.198005 containerd[1572]: time="2025-01-13T20:45:48.197973667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:45:48.198005 containerd[1572]: time="2025-01-13T20:45:48.197997923Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:45:48.198108 containerd[1572]: time="2025-01-13T20:45:48.198090767Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:45:48.198173 containerd[1572]: time="2025-01-13T20:45:48.198113700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198173 containerd[1572]: time="2025-01-13T20:45:48.198130531Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198173 containerd[1572]: time="2025-01-13T20:45:48.198146762Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198173 containerd[1572]: time="2025-01-13T20:45:48.198165677Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198267 containerd[1572]: time="2025-01-13T20:45:48.198182118Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198267 containerd[1572]: time="2025-01-13T20:45:48.198198118Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198267 containerd[1572]: time="2025-01-13T20:45:48.198213867Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198267 containerd[1572]: time="2025-01-13T20:45:48.198228846Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:45:48.198267 containerd[1572]: time="2025-01-13T20:45:48.198252400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198267 containerd[1572]: time="2025-01-13T20:45:48.198268670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198284219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198300801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198329114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198346747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198361524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198376773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198393394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198410556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198424 containerd[1572]: time="2025-01-13T20:45:48.198424823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198440422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198454859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198472372Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198495195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198511466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198524159Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198582248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198602146Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198613908Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198627002Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198637302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198651328Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198664553Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:45:48.198839 containerd[1572]: time="2025-01-13T20:45:48.198675513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:45:48.199344 containerd[1572]: time="2025-01-13T20:45:48.199077146Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:45:48.199344 containerd[1572]: time="2025-01-13T20:45:48.199138101Z" level=info msg="Connect containerd service"
Jan 13 20:45:48.199344 containerd[1572]: time="2025-01-13T20:45:48.199175080Z" level=info msg="using legacy CRI server"
Jan 13 20:45:48.199344 containerd[1572]: time="2025-01-13T20:45:48.199186622Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:45:48.199344 containerd[1572]: time="2025-01-13T20:45:48.199314321Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:45:48.200065 containerd[1572]: time="2025-01-13T20:45:48.199998544Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:45:48.200366 containerd[1572]: time="2025-01-13T20:45:48.200315639Z" level=info msg="Start subscribing containerd event"
Jan 13 20:45:48.200407 containerd[1572]: time="2025-01-13T20:45:48.200371384Z" level=info msg="Start recovering state"
Jan 13 20:45:48.200452 containerd[1572]: time="2025-01-13T20:45:48.200421778Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:45:48.200483 containerd[1572]: time="2025-01-13T20:45:48.200450783Z" level=info msg="Start event monitor"
Jan 13 20:45:48.200519 containerd[1572]: time="2025-01-13T20:45:48.200485367Z" level=info msg="Start snapshots syncer"
Jan 13 20:45:48.200519 containerd[1572]: time="2025-01-13T20:45:48.200499334Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:45:48.200519 containerd[1572]: time="2025-01-13T20:45:48.200505565Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:45:48.200601 containerd[1572]: time="2025-01-13T20:45:48.200512578Z" level=info msg="Start streaming server"
Jan 13 20:45:48.200660 containerd[1572]: time="2025-01-13T20:45:48.200640258Z" level=info msg="containerd successfully booted in 0.047349s"
Jan 13 20:45:48.202828 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:45:48.204774 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:45:48.205255 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:45:48.218427 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:45:48.230058 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:45:48.240387 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:45:48.243214 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:45:48.244611 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:45:48.646776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:48.648542 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:45:48.651240 systemd[1]: Startup finished in 6.379s (kernel) + 4.000s (userspace) = 10.380s.
Jan 13 20:45:48.682565 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:45:49.186339 kubelet[1651]: E0113 20:45:49.186224 1651 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:45:49.191398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:45:49.191801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:45:57.095393 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:45:57.108238 systemd[1]: Started sshd@0-10.0.0.153:22-10.0.0.1:35926.service - OpenSSH per-connection server daemon (10.0.0.1:35926).
Jan 13 20:45:57.157274 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 35926 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:45:57.159407 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:45:57.167736 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:45:57.177248 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:45:57.179102 systemd-logind[1549]: New session 1 of user core.
Jan 13 20:45:57.189816 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:45:57.192647 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:45:57.200619 (systemd)[1671]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:45:57.317123 systemd[1671]: Queued start job for default target default.target.
Jan 13 20:45:57.317693 systemd[1671]: Created slice app.slice - User Application Slice.
Jan 13 20:45:57.317730 systemd[1671]: Reached target paths.target - Paths.
Jan 13 20:45:57.317747 systemd[1671]: Reached target timers.target - Timers.
Jan 13 20:45:57.329110 systemd[1671]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:45:57.336174 systemd[1671]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:45:57.336262 systemd[1671]: Reached target sockets.target - Sockets.
Jan 13 20:45:57.336281 systemd[1671]: Reached target basic.target - Basic System.
Jan 13 20:45:57.336330 systemd[1671]: Reached target default.target - Main User Target.
Jan 13 20:45:57.336379 systemd[1671]: Startup finished in 129ms.
Jan 13 20:45:57.336970 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:45:57.338741 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:45:57.396326 systemd[1]: Started sshd@1-10.0.0.153:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932).
Jan 13 20:45:57.436222 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:45:57.437690 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:45:57.441706 systemd-logind[1549]: New session 2 of user core.
Jan 13 20:45:57.455279 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:45:57.507827 sshd[1686]: Connection closed by 10.0.0.1 port 35932
Jan 13 20:45:57.508228 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Jan 13 20:45:57.520285 systemd[1]: Started sshd@2-10.0.0.153:22-10.0.0.1:35934.service - OpenSSH per-connection server daemon (10.0.0.1:35934).
Jan 13 20:45:57.521023 systemd[1]: sshd@1-10.0.0.153:22-10.0.0.1:35932.service: Deactivated successfully.
Jan 13 20:45:57.523236 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:45:57.523918 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:45:57.525245 systemd-logind[1549]: Removed session 2.
Jan 13 20:45:57.559091 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 35934 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:45:57.560684 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:45:57.565091 systemd-logind[1549]: New session 3 of user core.
Jan 13 20:45:57.575368 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:45:57.625886 sshd[1694]: Connection closed by 10.0.0.1 port 35934
Jan 13 20:45:57.626291 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Jan 13 20:45:57.638705 systemd[1]: Started sshd@3-10.0.0.153:22-10.0.0.1:35946.service - OpenSSH per-connection server daemon (10.0.0.1:35946).
Jan 13 20:45:57.639546 systemd[1]: sshd@2-10.0.0.153:22-10.0.0.1:35934.service: Deactivated successfully.
Jan 13 20:45:57.642562 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:45:57.643262 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:45:57.645058 systemd-logind[1549]: Removed session 3.
Jan 13 20:45:57.678680 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 35946 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:45:57.680169 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:45:57.684414 systemd-logind[1549]: New session 4 of user core.
Jan 13 20:45:57.694300 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:45:57.749900 sshd[1702]: Connection closed by 10.0.0.1 port 35946
Jan 13 20:45:57.750375 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Jan 13 20:45:57.760247 systemd[1]: Started sshd@4-10.0.0.153:22-10.0.0.1:35948.service - OpenSSH per-connection server daemon (10.0.0.1:35948).
Jan 13 20:45:57.760728 systemd[1]: sshd@3-10.0.0.153:22-10.0.0.1:35946.service: Deactivated successfully.
Jan 13 20:45:57.763134 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:45:57.763768 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:45:57.765285 systemd-logind[1549]: Removed session 4.
Jan 13 20:45:57.799376 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 35948 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:45:57.801787 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:45:57.811641 systemd-logind[1549]: New session 5 of user core.
Jan 13 20:45:57.822736 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:45:57.898656 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:45:57.899225 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:45:57.922981 sudo[1711]: pam_unix(sudo:session): session closed for user root
Jan 13 20:45:57.925968 sshd[1710]: Connection closed by 10.0.0.1 port 35948
Jan 13 20:45:57.928581 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Jan 13 20:45:57.941676 systemd[1]: Started sshd@5-10.0.0.153:22-10.0.0.1:35960.service - OpenSSH per-connection server daemon (10.0.0.1:35960).
Jan 13 20:45:57.942509 systemd[1]: sshd@4-10.0.0.153:22-10.0.0.1:35948.service: Deactivated successfully.
Jan 13 20:45:57.947932 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:45:57.950155 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:45:57.954069 systemd-logind[1549]: Removed session 5.
Jan 13 20:45:57.990983 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 35960 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:45:57.993378 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:45:58.001932 systemd-logind[1549]: New session 6 of user core.
Jan 13 20:45:58.013679 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:45:58.073499 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:45:58.073992 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:45:58.080193 sudo[1721]: pam_unix(sudo:session): session closed for user root
Jan 13 20:45:58.087831 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:45:58.088207 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:45:58.109481 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:45:58.141096 augenrules[1743]: No rules
Jan 13 20:45:58.143244 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:45:58.143725 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:45:58.145226 sudo[1720]: pam_unix(sudo:session): session closed for user root
Jan 13 20:45:58.147086 sshd[1719]: Connection closed by 10.0.0.1 port 35960
Jan 13 20:45:58.147453 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Jan 13 20:45:58.156381 systemd[1]: Started sshd@6-10.0.0.153:22-10.0.0.1:35972.service - OpenSSH per-connection server daemon (10.0.0.1:35972).
Jan 13 20:45:58.157106 systemd[1]: sshd@5-10.0.0.153:22-10.0.0.1:35960.service: Deactivated successfully.
Jan 13 20:45:58.160077 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:45:58.160661 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:45:58.162532 systemd-logind[1549]: Removed session 6.
Jan 13 20:45:58.195592 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:45:58.197537 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:45:58.202058 systemd-logind[1549]: New session 7 of user core.
Jan 13 20:45:58.211347 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:45:58.269176 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:45:58.269653 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:45:58.300459 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:45:58.320693 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:45:58.321056 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:45:58.810309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:58.824336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:58.844988 systemd[1]: Reloading requested from client PID 1807 ('systemctl') (unit session-7.scope)...
Jan 13 20:45:58.845006 systemd[1]: Reloading...
Jan 13 20:45:58.920912 zram_generator::config[1845]: No configuration found.
Jan 13 20:45:59.188262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:59.259764 systemd[1]: Reloading finished in 414 ms.
Jan 13 20:45:59.313761 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:45:59.313893 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:45:59.314355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:59.317630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:59.463919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:59.468833 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:45:59.512568 kubelet[1906]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:59.512568 kubelet[1906]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:45:59.512568 kubelet[1906]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:59.513007 kubelet[1906]: I0113 20:45:59.512656 1906 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:46:00.230483 kubelet[1906]: I0113 20:46:00.230417 1906 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:46:00.230483 kubelet[1906]: I0113 20:46:00.230451 1906 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:46:00.230662 kubelet[1906]: I0113 20:46:00.230646 1906 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:46:00.244620 kubelet[1906]: I0113 20:46:00.244569 1906 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:46:00.261079 kubelet[1906]: I0113 20:46:00.261037 1906 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:46:00.261546 kubelet[1906]: I0113 20:46:00.261519 1906 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:46:00.261723 kubelet[1906]: I0113 20:46:00.261696 1906 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:46:00.262242 kubelet[1906]: I0113 20:46:00.262214 1906 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:46:00.262242 kubelet[1906]: I0113 20:46:00.262233 1906 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:46:00.262395 kubelet[1906]: I0113 20:46:00.262371 1906 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:46:00.262511 kubelet[1906]: I0113 20:46:00.262490 1906 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:46:00.262613 kubelet[1906]: I0113 20:46:00.262512 1906 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:46:00.262613 kubelet[1906]: I0113 20:46:00.262545 1906 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:46:00.262613 kubelet[1906]: I0113 20:46:00.262563 1906 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:46:00.262680 kubelet[1906]: E0113 20:46:00.262641 1906 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:00.262768 kubelet[1906]: E0113 20:46:00.262737 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:00.264553 kubelet[1906]: I0113 20:46:00.264506 1906 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:46:00.266470 kubelet[1906]: W0113 20:46:00.266277 1906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:46:00.266470 kubelet[1906]: E0113 20:46:00.266324 1906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:46:00.266470 kubelet[1906]: W0113 20:46:00.266361 1906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.153" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:46:00.266470 kubelet[1906]: E0113 20:46:00.266378 1906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.153" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:46:00.267081 kubelet[1906]: I0113 20:46:00.267037 1906 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:46:00.267986 kubelet[1906]: W0113 20:46:00.267946 1906 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:46:00.268698 kubelet[1906]: I0113 20:46:00.268675 1906 server.go:1256] "Started kubelet"
Jan 13 20:46:00.268981 kubelet[1906]: I0113 20:46:00.268759 1906 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:46:00.268981 kubelet[1906]: I0113 20:46:00.268955 1906 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:46:00.269265 kubelet[1906]: I0113 20:46:00.269237 1906 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:46:00.269666 kubelet[1906]: I0113 20:46:00.269632 1906 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:46:00.270763 kubelet[1906]: I0113 20:46:00.270319 1906 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:46:00.270763 kubelet[1906]: I0113 20:46:00.270707 1906 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:46:00.270825 kubelet[1906]: I0113 20:46:00.270794 1906 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:46:00.270866 kubelet[1906]: I0113 20:46:00.270850 1906 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:46:00.273322 kubelet[1906]: I0113 20:46:00.273296 1906 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:46:00.273432 kubelet[1906]: I0113 20:46:00.273412 1906 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:46:00.274361 kubelet[1906]: E0113 20:46:00.274290 1906 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:46:00.274748 kubelet[1906]: I0113 20:46:00.274732 1906 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:46:00.284588 kubelet[1906]: E0113 20:46:00.284554 1906 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.153\" not found" node="10.0.0.153"
Jan 13 20:46:00.297117 kubelet[1906]: I0113 20:46:00.297075 1906 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:46:00.297117 kubelet[1906]: I0113 20:46:00.297097 1906 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:46:00.297117 kubelet[1906]: I0113 20:46:00.297116 1906 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:46:00.372396 kubelet[1906]: I0113 20:46:00.372356 1906 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.153"
Jan 13 20:46:00.483960 kubelet[1906]: I0113 20:46:00.483803 1906 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.153"
Jan 13 20:46:00.601613 kubelet[1906]: E0113 20:46:00.601565 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:00.702134 kubelet[1906]: E0113 20:46:00.702075 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:00.713403 kubelet[1906]: I0113 20:46:00.713358 1906 policy_none.go:49] "None policy: Start"
Jan 13 20:46:00.714381 kubelet[1906]: I0113 20:46:00.714308 1906 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:46:00.714381 kubelet[1906]: I0113 20:46:00.714335 1906 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:46:00.726098 kubelet[1906]: I0113 20:46:00.726065 1906 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:46:00.726346 kubelet[1906]: I0113 20:46:00.726328 1906 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:46:00.728229 kubelet[1906]: E0113 20:46:00.728203 1906 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.153\" not found"
Jan 13 20:46:00.739822 kubelet[1906]: I0113 20:46:00.739681 1906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:46:00.741577 kubelet[1906]: I0113 20:46:00.741536 1906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:46:00.741577 kubelet[1906]: I0113 20:46:00.741581 1906 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:46:00.741704 kubelet[1906]: I0113 20:46:00.741603 1906 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:46:00.742048 kubelet[1906]: E0113 20:46:00.741743 1906 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 13 20:46:00.802300 kubelet[1906]: E0113 20:46:00.802229 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:00.903046 kubelet[1906]: E0113 20:46:00.902968 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:00.985244 sudo[1756]: pam_unix(sudo:session): session closed for user root
Jan 13 20:46:00.986707 sshd[1755]: Connection closed by 10.0.0.1 port 35972
Jan 13 20:46:00.987056 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:00.991376 systemd[1]: sshd@6-10.0.0.153:22-10.0.0.1:35972.service: Deactivated successfully.
Jan 13 20:46:00.994184 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:46:00.994256 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:46:00.995517 systemd-logind[1549]: Removed session 7.
Jan 13 20:46:01.003661 kubelet[1906]: E0113 20:46:01.003623 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:01.104535 kubelet[1906]: E0113 20:46:01.104410 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:01.205341 kubelet[1906]: E0113 20:46:01.205271 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:01.232578 kubelet[1906]: I0113 20:46:01.232502 1906 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 20:46:01.232806 kubelet[1906]: W0113 20:46:01.232777 1906 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:46:01.232878 kubelet[1906]: W0113 20:46:01.232818 1906 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:46:01.263185 kubelet[1906]: E0113 20:46:01.263005 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:01.306219 kubelet[1906]: E0113 20:46:01.306118 1906 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found"
Jan 13 20:46:01.407781 kubelet[1906]: I0113 20:46:01.407733 1906 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:46:01.408094 containerd[1572]: time="2025-01-13T20:46:01.408050547Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:46:01.408527 kubelet[1906]: I0113 20:46:01.408235 1906 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 20:46:02.263614 kubelet[1906]: E0113 20:46:02.263564 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:02.263614 kubelet[1906]: I0113 20:46:02.263583 1906 apiserver.go:52] "Watching apiserver"
Jan 13 20:46:02.267375 kubelet[1906]: I0113 20:46:02.267356 1906 topology_manager.go:215] "Topology Admit Handler" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c" podNamespace="calico-system" podName="csi-node-driver-7ccjv"
Jan 13 20:46:02.267476 kubelet[1906]: I0113 20:46:02.267461 1906 topology_manager.go:215] "Topology Admit Handler" podUID="b239f6be-e4f6-4d81-a925-2e9be3fb2abd" podNamespace="kube-system" podName="kube-proxy-kx9m6"
Jan 13 20:46:02.267516 kubelet[1906]: I0113 20:46:02.267504 1906 topology_manager.go:215] "Topology Admit Handler" podUID="e43153fd-38f2-475b-993c-217db25b4020" podNamespace="calico-system" podName="calico-node-jvjfl"
Jan 13 20:46:02.267719 kubelet[1906]: E0113 20:46:02.267692 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:02.272495 kubelet[1906]: I0113 20:46:02.272259 1906 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:46:02.282273 kubelet[1906]: I0113 20:46:02.282230 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdwkr\" (UniqueName: \"kubernetes.io/projected/b239f6be-e4f6-4d81-a925-2e9be3fb2abd-kube-api-access-rdwkr\") pod \"kube-proxy-kx9m6\" (UID: \"b239f6be-e4f6-4d81-a925-2e9be3fb2abd\") " pod="kube-system/kube-proxy-kx9m6"
Jan 13 20:46:02.282273 kubelet[1906]: I0113 20:46:02.282272 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-xtables-lock\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282421 kubelet[1906]: I0113 20:46:02.282315 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-cni-log-dir\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282421 kubelet[1906]: I0113 20:46:02.282375 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b239f6be-e4f6-4d81-a925-2e9be3fb2abd-xtables-lock\") pod \"kube-proxy-kx9m6\" (UID: \"b239f6be-e4f6-4d81-a925-2e9be3fb2abd\") " pod="kube-system/kube-proxy-kx9m6"
Jan 13 20:46:02.282495 kubelet[1906]: I0113 20:46:02.282457 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b239f6be-e4f6-4d81-a925-2e9be3fb2abd-lib-modules\") pod \"kube-proxy-kx9m6\" (UID: \"b239f6be-e4f6-4d81-a925-2e9be3fb2abd\") " pod="kube-system/kube-proxy-kx9m6"
Jan 13 20:46:02.282530 kubelet[1906]: I0113 20:46:02.282513 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e43153fd-38f2-475b-993c-217db25b4020-tigera-ca-bundle\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282570 kubelet[1906]: I0113 20:46:02.282553 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e43153fd-38f2-475b-993c-217db25b4020-node-certs\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282593 kubelet[1906]: I0113 20:46:02.282586 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-var-lib-calico\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282615 kubelet[1906]: I0113 20:46:02.282606 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-cni-net-dir\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282645 kubelet[1906]: I0113 20:46:02.282636 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4wn\" (UniqueName: \"kubernetes.io/projected/e43153fd-38f2-475b-993c-217db25b4020-kube-api-access-9b4wn\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282667 kubelet[1906]: I0113 20:46:02.282654 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a7de8883-b1b1-470b-8914-6f939b888e9c-varrun\") pod \"csi-node-driver-7ccjv\" (UID: \"a7de8883-b1b1-470b-8914-6f939b888e9c\") " pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:02.282688 kubelet[1906]: I0113 20:46:02.282674 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-lib-modules\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282710 kubelet[1906]: I0113 20:46:02.282704 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-policysync\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282732 kubelet[1906]: I0113 20:46:02.282725 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-flexvol-driver-host\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl"
Jan 13 20:46:02.282759 kubelet[1906]: I0113 20:46:02.282744 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7de8883-b1b1-470b-8914-6f939b888e9c-kubelet-dir\") pod \"csi-node-driver-7ccjv\" (UID: \"a7de8883-b1b1-470b-8914-6f939b888e9c\") " pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:02.282781 kubelet[1906]: I0113 20:46:02.282764 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7de8883-b1b1-470b-8914-6f939b888e9c-socket-dir\") pod \"csi-node-driver-7ccjv\" (UID: \"a7de8883-b1b1-470b-8914-6f939b888e9c\") " pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:02.282803 kubelet[1906]: I0113 20:46:02.282782 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7de8883-b1b1-470b-8914-6f939b888e9c-registration-dir\") pod \"csi-node-driver-7ccjv\" (UID: \"a7de8883-b1b1-470b-8914-6f939b888e9c\") " pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:02.282824 kubelet[1906]: I0113 20:46:02.282802 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljl8c\" (UniqueName: \"kubernetes.io/projected/a7de8883-b1b1-470b-8914-6f939b888e9c-kube-api-access-ljl8c\") pod \"csi-node-driver-7ccjv\" (UID: \"a7de8883-b1b1-470b-8914-6f939b888e9c\") " pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:02.282908 kubelet[1906]: I0113 20:46:02.282847 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b239f6be-e4f6-4d81-a925-2e9be3fb2abd-kube-proxy\") pod \"kube-proxy-kx9m6\" (UID: \"b239f6be-e4f6-4d81-a925-2e9be3fb2abd\") " pod="kube-system/kube-proxy-kx9m6" Jan 13 20:46:02.282908 kubelet[1906]: I0113 20:46:02.282870 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-var-run-calico\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl" Jan 13 20:46:02.282908 kubelet[1906]: I0113 20:46:02.282889 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/e43153fd-38f2-475b-993c-217db25b4020-cni-bin-dir\") pod \"calico-node-jvjfl\" (UID: \"e43153fd-38f2-475b-993c-217db25b4020\") " pod="calico-system/calico-node-jvjfl" Jan 13 20:46:02.384889 kubelet[1906]: E0113 20:46:02.384852 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.384889 kubelet[1906]: W0113 20:46:02.384876 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.384889 kubelet[1906]: E0113 20:46:02.384898 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.385155 kubelet[1906]: E0113 20:46:02.385130 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.385155 kubelet[1906]: W0113 20:46:02.385145 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.385213 kubelet[1906]: E0113 20:46:02.385163 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.385411 kubelet[1906]: E0113 20:46:02.385389 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.385411 kubelet[1906]: W0113 20:46:02.385408 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.385477 kubelet[1906]: E0113 20:46:02.385425 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.385680 kubelet[1906]: E0113 20:46:02.385662 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.385680 kubelet[1906]: W0113 20:46:02.385677 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.385756 kubelet[1906]: E0113 20:46:02.385699 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.385952 kubelet[1906]: E0113 20:46:02.385937 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.385995 kubelet[1906]: W0113 20:46:02.385961 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.386046 kubelet[1906]: E0113 20:46:02.386006 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.386215 kubelet[1906]: E0113 20:46:02.386198 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.386215 kubelet[1906]: W0113 20:46:02.386210 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.386292 kubelet[1906]: E0113 20:46:02.386238 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.386475 kubelet[1906]: E0113 20:46:02.386458 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.386475 kubelet[1906]: W0113 20:46:02.386469 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.386562 kubelet[1906]: E0113 20:46:02.386502 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.386708 kubelet[1906]: E0113 20:46:02.386692 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.386708 kubelet[1906]: W0113 20:46:02.386704 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.386774 kubelet[1906]: E0113 20:46:02.386734 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.386937 kubelet[1906]: E0113 20:46:02.386922 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.386937 kubelet[1906]: W0113 20:46:02.386933 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.387033 kubelet[1906]: E0113 20:46:02.386960 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.387168 kubelet[1906]: E0113 20:46:02.387152 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.387168 kubelet[1906]: W0113 20:46:02.387164 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.387233 kubelet[1906]: E0113 20:46:02.387200 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.387384 kubelet[1906]: E0113 20:46:02.387369 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.387384 kubelet[1906]: W0113 20:46:02.387380 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.387475 kubelet[1906]: E0113 20:46:02.387421 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.387608 kubelet[1906]: E0113 20:46:02.387593 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.387608 kubelet[1906]: W0113 20:46:02.387604 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.387673 kubelet[1906]: E0113 20:46:02.387642 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.387819 kubelet[1906]: E0113 20:46:02.387803 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.387819 kubelet[1906]: W0113 20:46:02.387814 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.387900 kubelet[1906]: E0113 20:46:02.387846 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.388041 kubelet[1906]: E0113 20:46:02.388024 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.388041 kubelet[1906]: W0113 20:46:02.388037 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.388109 kubelet[1906]: E0113 20:46:02.388067 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.388260 kubelet[1906]: E0113 20:46:02.388245 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.388260 kubelet[1906]: W0113 20:46:02.388256 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.388348 kubelet[1906]: E0113 20:46:02.388284 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.388479 kubelet[1906]: E0113 20:46:02.388464 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.388479 kubelet[1906]: W0113 20:46:02.388475 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.388545 kubelet[1906]: E0113 20:46:02.388505 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.388690 kubelet[1906]: E0113 20:46:02.388675 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.388690 kubelet[1906]: W0113 20:46:02.388686 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.388767 kubelet[1906]: E0113 20:46:02.388713 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.388897 kubelet[1906]: E0113 20:46:02.388881 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.388897 kubelet[1906]: W0113 20:46:02.388893 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.388962 kubelet[1906]: E0113 20:46:02.388921 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.389128 kubelet[1906]: E0113 20:46:02.389108 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.389128 kubelet[1906]: W0113 20:46:02.389120 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.389196 kubelet[1906]: E0113 20:46:02.389149 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.389332 kubelet[1906]: E0113 20:46:02.389315 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.389332 kubelet[1906]: W0113 20:46:02.389326 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.389406 kubelet[1906]: E0113 20:46:02.389354 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.389561 kubelet[1906]: E0113 20:46:02.389539 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.389561 kubelet[1906]: W0113 20:46:02.389551 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.389636 kubelet[1906]: E0113 20:46:02.389579 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.389769 kubelet[1906]: E0113 20:46:02.389750 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.389769 kubelet[1906]: W0113 20:46:02.389761 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.389835 kubelet[1906]: E0113 20:46:02.389789 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.390072 kubelet[1906]: E0113 20:46:02.390055 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.390072 kubelet[1906]: W0113 20:46:02.390070 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.390144 kubelet[1906]: E0113 20:46:02.390099 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.390299 kubelet[1906]: E0113 20:46:02.390282 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.390299 kubelet[1906]: W0113 20:46:02.390293 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.390363 kubelet[1906]: E0113 20:46:02.390322 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.390537 kubelet[1906]: E0113 20:46:02.390521 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.390537 kubelet[1906]: W0113 20:46:02.390535 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.390612 kubelet[1906]: E0113 20:46:02.390568 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.390779 kubelet[1906]: E0113 20:46:02.390758 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.390779 kubelet[1906]: W0113 20:46:02.390772 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.390852 kubelet[1906]: E0113 20:46:02.390805 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.391004 kubelet[1906]: E0113 20:46:02.390985 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.391004 kubelet[1906]: W0113 20:46:02.390999 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.391092 kubelet[1906]: E0113 20:46:02.391071 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.391263 kubelet[1906]: E0113 20:46:02.391246 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.391263 kubelet[1906]: W0113 20:46:02.391257 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.391329 kubelet[1906]: E0113 20:46:02.391286 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.391479 kubelet[1906]: E0113 20:46:02.391462 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.391479 kubelet[1906]: W0113 20:46:02.391474 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.391548 kubelet[1906]: E0113 20:46:02.391504 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.391688 kubelet[1906]: E0113 20:46:02.391673 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.391688 kubelet[1906]: W0113 20:46:02.391684 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.391754 kubelet[1906]: E0113 20:46:02.391712 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.391909 kubelet[1906]: E0113 20:46:02.391894 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.391909 kubelet[1906]: W0113 20:46:02.391905 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.391977 kubelet[1906]: E0113 20:46:02.391935 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.392149 kubelet[1906]: E0113 20:46:02.392133 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.392149 kubelet[1906]: W0113 20:46:02.392145 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.392219 kubelet[1906]: E0113 20:46:02.392176 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.392375 kubelet[1906]: E0113 20:46:02.392360 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.392375 kubelet[1906]: W0113 20:46:02.392371 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.392461 kubelet[1906]: E0113 20:46:02.392407 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.392596 kubelet[1906]: E0113 20:46:02.392581 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.392596 kubelet[1906]: W0113 20:46:02.392592 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.392670 kubelet[1906]: E0113 20:46:02.392621 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.392826 kubelet[1906]: E0113 20:46:02.392809 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.392826 kubelet[1906]: W0113 20:46:02.392820 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.392891 kubelet[1906]: E0113 20:46:02.392855 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.393109 kubelet[1906]: E0113 20:46:02.393090 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.393109 kubelet[1906]: W0113 20:46:02.393105 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.393337 kubelet[1906]: E0113 20:46:02.393317 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.393337 kubelet[1906]: E0113 20:46:02.393337 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.393436 kubelet[1906]: W0113 20:46:02.393347 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.393703 kubelet[1906]: E0113 20:46:02.393525 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.393703 kubelet[1906]: E0113 20:46:02.393582 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.393703 kubelet[1906]: W0113 20:46:02.393593 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.393703 kubelet[1906]: E0113 20:46:02.393612 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.393845 kubelet[1906]: E0113 20:46:02.393830 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.393875 kubelet[1906]: W0113 20:46:02.393844 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.393875 kubelet[1906]: E0113 20:46:02.393858 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.396966 kubelet[1906]: E0113 20:46:02.396941 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.396966 kubelet[1906]: W0113 20:46:02.396956 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.397080 kubelet[1906]: E0113 20:46:02.396971 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.438635 kubelet[1906]: E0113 20:46:02.438577 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.438635 kubelet[1906]: W0113 20:46:02.438626 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.438754 kubelet[1906]: E0113 20:46:02.438654 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.439209 kubelet[1906]: E0113 20:46:02.438998 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.439209 kubelet[1906]: W0113 20:46:02.439045 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.439209 kubelet[1906]: E0113 20:46:02.439059 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:02.439385 kubelet[1906]: E0113 20:46:02.439371 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:02.439460 kubelet[1906]: W0113 20:46:02.439447 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:02.439521 kubelet[1906]: E0113 20:46:02.439510 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:02.572675 kubelet[1906]: E0113 20:46:02.572627 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:02.572933 kubelet[1906]: E0113 20:46:02.572878 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:02.573966 containerd[1572]: time="2025-01-13T20:46:02.573561596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvjfl,Uid:e43153fd-38f2-475b-993c-217db25b4020,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:02.574655 containerd[1572]: time="2025-01-13T20:46:02.574090237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kx9m6,Uid:b239f6be-e4f6-4d81-a925-2e9be3fb2abd,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:03.264656 kubelet[1906]: E0113 20:46:03.264603 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:03.434737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681496497.mount: Deactivated successfully. 
Jan 13 20:46:03.442871 containerd[1572]: time="2025-01-13T20:46:03.442789508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:03.444740 containerd[1572]: time="2025-01-13T20:46:03.444693389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:46:03.447703 containerd[1572]: time="2025-01-13T20:46:03.447671754Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:03.448905 containerd[1572]: time="2025-01-13T20:46:03.448835647Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:03.449568 containerd[1572]: time="2025-01-13T20:46:03.449524319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:46:03.451505 containerd[1572]: time="2025-01-13T20:46:03.451450952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:03.453063 containerd[1572]: time="2025-01-13T20:46:03.453031846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 878.855438ms" Jan 13 20:46:03.454034 containerd[1572]: 
time="2025-01-13T20:46:03.453982920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 880.289217ms" Jan 13 20:46:03.561306 containerd[1572]: time="2025-01-13T20:46:03.561191725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:03.561306 containerd[1572]: time="2025-01-13T20:46:03.561244544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:03.562137 containerd[1572]: time="2025-01-13T20:46:03.561258500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:03.562137 containerd[1572]: time="2025-01-13T20:46:03.561243312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:03.562137 containerd[1572]: time="2025-01-13T20:46:03.561336266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:03.562137 containerd[1572]: time="2025-01-13T20:46:03.561355131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:03.562137 containerd[1572]: time="2025-01-13T20:46:03.561487720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:03.562284 containerd[1572]: time="2025-01-13T20:46:03.562215395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:03.662105 containerd[1572]: time="2025-01-13T20:46:03.661921349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvjfl,Uid:e43153fd-38f2-475b-993c-217db25b4020,Namespace:calico-system,Attempt:0,} returns sandbox id \"bfd46e49f0a6beff2cd3cb55ede4974317fe5a8d43f615fc945e0617dcc58133\"" Jan 13 20:46:03.662105 containerd[1572]: time="2025-01-13T20:46:03.661975691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kx9m6,Uid:b239f6be-e4f6-4d81-a925-2e9be3fb2abd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9c5c0b6e7afa6350516d32e40afe2795280730f8c30cc45615f20c2167acab8\"" Jan 13 20:46:03.663404 kubelet[1906]: E0113 20:46:03.663375 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:03.663817 kubelet[1906]: E0113 20:46:03.663378 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:03.664717 containerd[1572]: time="2025-01-13T20:46:03.664684010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:46:04.265459 kubelet[1906]: E0113 20:46:04.265410 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:04.743683 kubelet[1906]: E0113 20:46:04.743104 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c" Jan 13 20:46:05.065190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167451167.mount: Deactivated 
successfully. Jan 13 20:46:05.266438 kubelet[1906]: E0113 20:46:05.266372 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:05.328418 containerd[1572]: time="2025-01-13T20:46:05.328261682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.329161 containerd[1572]: time="2025-01-13T20:46:05.329093442Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:46:05.330349 containerd[1572]: time="2025-01-13T20:46:05.330294995Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.332344 containerd[1572]: time="2025-01-13T20:46:05.332290597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.332889 containerd[1572]: time="2025-01-13T20:46:05.332812135Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.668093531s" Jan 13 20:46:05.332889 containerd[1572]: time="2025-01-13T20:46:05.332874372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:46:05.333810 containerd[1572]: time="2025-01-13T20:46:05.333762608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 
20:46:05.335124 containerd[1572]: time="2025-01-13T20:46:05.335096069Z" level=info msg="CreateContainer within sandbox \"e9c5c0b6e7afa6350516d32e40afe2795280730f8c30cc45615f20c2167acab8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:46:05.353113 containerd[1572]: time="2025-01-13T20:46:05.353049658Z" level=info msg="CreateContainer within sandbox \"e9c5c0b6e7afa6350516d32e40afe2795280730f8c30cc45615f20c2167acab8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e18ebfebffa7b5e27335a2cecd31141c3f072fdd926918a8214d1a05b2bb8949\"" Jan 13 20:46:05.353819 containerd[1572]: time="2025-01-13T20:46:05.353780218Z" level=info msg="StartContainer for \"e18ebfebffa7b5e27335a2cecd31141c3f072fdd926918a8214d1a05b2bb8949\"" Jan 13 20:46:05.499597 containerd[1572]: time="2025-01-13T20:46:05.499534499Z" level=info msg="StartContainer for \"e18ebfebffa7b5e27335a2cecd31141c3f072fdd926918a8214d1a05b2bb8949\" returns successfully" Jan 13 20:46:05.754434 kubelet[1906]: E0113 20:46:05.753923 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:05.797859 kubelet[1906]: E0113 20:46:05.797825 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.797859 kubelet[1906]: W0113 20:46:05.797848 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.797859 kubelet[1906]: E0113 20:46:05.797871 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.798163 kubelet[1906]: E0113 20:46:05.798146 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.798163 kubelet[1906]: W0113 20:46:05.798161 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.798239 kubelet[1906]: E0113 20:46:05.798176 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.798423 kubelet[1906]: E0113 20:46:05.798407 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.798423 kubelet[1906]: W0113 20:46:05.798421 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.798496 kubelet[1906]: E0113 20:46:05.798435 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.798678 kubelet[1906]: E0113 20:46:05.798662 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.798678 kubelet[1906]: W0113 20:46:05.798675 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.798746 kubelet[1906]: E0113 20:46:05.798701 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.799096 kubelet[1906]: E0113 20:46:05.799061 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.799096 kubelet[1906]: W0113 20:46:05.799084 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.799096 kubelet[1906]: E0113 20:46:05.799111 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.799370 kubelet[1906]: E0113 20:46:05.799354 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.799370 kubelet[1906]: W0113 20:46:05.799365 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.799370 kubelet[1906]: E0113 20:46:05.799376 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.799637 kubelet[1906]: E0113 20:46:05.799616 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.799637 kubelet[1906]: W0113 20:46:05.799633 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.799713 kubelet[1906]: E0113 20:46:05.799650 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.799914 kubelet[1906]: E0113 20:46:05.799887 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.799914 kubelet[1906]: W0113 20:46:05.799902 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.799914 kubelet[1906]: E0113 20:46:05.799916 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.800213 kubelet[1906]: E0113 20:46:05.800184 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.800213 kubelet[1906]: W0113 20:46:05.800210 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.800284 kubelet[1906]: E0113 20:46:05.800226 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.800482 kubelet[1906]: E0113 20:46:05.800460 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.800482 kubelet[1906]: W0113 20:46:05.800475 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.800564 kubelet[1906]: E0113 20:46:05.800489 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.800743 kubelet[1906]: E0113 20:46:05.800726 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.800743 kubelet[1906]: W0113 20:46:05.800740 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.800798 kubelet[1906]: E0113 20:46:05.800754 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.800978 kubelet[1906]: E0113 20:46:05.800961 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.800978 kubelet[1906]: W0113 20:46:05.800975 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.801061 kubelet[1906]: E0113 20:46:05.800988 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.801241 kubelet[1906]: E0113 20:46:05.801225 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.801241 kubelet[1906]: W0113 20:46:05.801238 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.801312 kubelet[1906]: E0113 20:46:05.801251 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.801487 kubelet[1906]: E0113 20:46:05.801470 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.801487 kubelet[1906]: W0113 20:46:05.801483 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.801554 kubelet[1906]: E0113 20:46:05.801496 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.801745 kubelet[1906]: E0113 20:46:05.801728 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.801745 kubelet[1906]: W0113 20:46:05.801742 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.801802 kubelet[1906]: E0113 20:46:05.801756 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.801982 kubelet[1906]: E0113 20:46:05.801966 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.801982 kubelet[1906]: W0113 20:46:05.801979 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.802078 kubelet[1906]: E0113 20:46:05.801992 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.802337 kubelet[1906]: E0113 20:46:05.802316 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.802337 kubelet[1906]: W0113 20:46:05.802330 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.802337 kubelet[1906]: E0113 20:46:05.802344 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.802583 kubelet[1906]: E0113 20:46:05.802567 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.802583 kubelet[1906]: W0113 20:46:05.802578 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.802669 kubelet[1906]: E0113 20:46:05.802589 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.802815 kubelet[1906]: E0113 20:46:05.802797 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.802815 kubelet[1906]: W0113 20:46:05.802808 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.802880 kubelet[1906]: E0113 20:46:05.802821 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.803071 kubelet[1906]: E0113 20:46:05.803053 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.803071 kubelet[1906]: W0113 20:46:05.803064 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.803071 kubelet[1906]: E0113 20:46:05.803074 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.809400 kubelet[1906]: E0113 20:46:05.809368 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.809400 kubelet[1906]: W0113 20:46:05.809383 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.809400 kubelet[1906]: E0113 20:46:05.809401 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.809688 kubelet[1906]: E0113 20:46:05.809664 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.809688 kubelet[1906]: W0113 20:46:05.809677 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.809799 kubelet[1906]: E0113 20:46:05.809711 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.810020 kubelet[1906]: E0113 20:46:05.809980 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.810020 kubelet[1906]: W0113 20:46:05.809993 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.810090 kubelet[1906]: E0113 20:46:05.810075 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.810331 kubelet[1906]: E0113 20:46:05.810304 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.810331 kubelet[1906]: W0113 20:46:05.810318 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.810419 kubelet[1906]: E0113 20:46:05.810337 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.810561 kubelet[1906]: E0113 20:46:05.810537 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.810561 kubelet[1906]: W0113 20:46:05.810550 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.810626 kubelet[1906]: E0113 20:46:05.810569 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.810856 kubelet[1906]: E0113 20:46:05.810831 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.810856 kubelet[1906]: W0113 20:46:05.810843 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.810935 kubelet[1906]: E0113 20:46:05.810878 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.811114 kubelet[1906]: E0113 20:46:05.811097 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.811114 kubelet[1906]: W0113 20:46:05.811109 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.811181 kubelet[1906]: E0113 20:46:05.811122 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.811379 kubelet[1906]: E0113 20:46:05.811362 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.811379 kubelet[1906]: W0113 20:46:05.811373 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.811463 kubelet[1906]: E0113 20:46:05.811394 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.811693 kubelet[1906]: E0113 20:46:05.811671 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.811693 kubelet[1906]: W0113 20:46:05.811685 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.811761 kubelet[1906]: E0113 20:46:05.811708 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.811974 kubelet[1906]: E0113 20:46:05.811956 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.811974 kubelet[1906]: W0113 20:46:05.811970 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.812083 kubelet[1906]: E0113 20:46:05.811990 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:05.812284 kubelet[1906]: E0113 20:46:05.812260 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.812284 kubelet[1906]: W0113 20:46:05.812280 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.812375 kubelet[1906]: E0113 20:46:05.812314 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:05.812570 kubelet[1906]: E0113 20:46:05.812550 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:05.812570 kubelet[1906]: W0113 20:46:05.812565 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:05.812635 kubelet[1906]: E0113 20:46:05.812580 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.267149 kubelet[1906]: E0113 20:46:06.267062 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:06.742258 kubelet[1906]: E0113 20:46:06.742194 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c" Jan 13 20:46:06.754695 kubelet[1906]: E0113 20:46:06.754662 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:06.810300 kubelet[1906]: E0113 20:46:06.810252 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.810300 kubelet[1906]: W0113 20:46:06.810290 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.810451 
kubelet[1906]: E0113 20:46:06.810318 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.810676 kubelet[1906]: E0113 20:46:06.810654 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.810676 kubelet[1906]: W0113 20:46:06.810669 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.810720 kubelet[1906]: E0113 20:46:06.810681 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.810919 kubelet[1906]: E0113 20:46:06.810906 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.810919 kubelet[1906]: W0113 20:46:06.810918 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.810989 kubelet[1906]: E0113 20:46:06.810931 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.811148 kubelet[1906]: E0113 20:46:06.811135 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.811179 kubelet[1906]: W0113 20:46:06.811147 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.811179 kubelet[1906]: E0113 20:46:06.811160 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.811381 kubelet[1906]: E0113 20:46:06.811368 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.811431 kubelet[1906]: W0113 20:46:06.811382 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.811431 kubelet[1906]: E0113 20:46:06.811397 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.811608 kubelet[1906]: E0113 20:46:06.811595 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.811645 kubelet[1906]: W0113 20:46:06.811607 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.811645 kubelet[1906]: E0113 20:46:06.811621 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.811828 kubelet[1906]: E0113 20:46:06.811804 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.811852 kubelet[1906]: W0113 20:46:06.811827 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.811852 kubelet[1906]: E0113 20:46:06.811840 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.812054 kubelet[1906]: E0113 20:46:06.812041 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.812054 kubelet[1906]: W0113 20:46:06.812053 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.812108 kubelet[1906]: E0113 20:46:06.812065 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.812280 kubelet[1906]: E0113 20:46:06.812250 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.812280 kubelet[1906]: W0113 20:46:06.812279 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.812326 kubelet[1906]: E0113 20:46:06.812290 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.812487 kubelet[1906]: E0113 20:46:06.812473 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.812487 kubelet[1906]: W0113 20:46:06.812484 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.812544 kubelet[1906]: E0113 20:46:06.812495 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.812711 kubelet[1906]: E0113 20:46:06.812697 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.812738 kubelet[1906]: W0113 20:46:06.812710 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.812738 kubelet[1906]: E0113 20:46:06.812722 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.812921 kubelet[1906]: E0113 20:46:06.812907 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.812921 kubelet[1906]: W0113 20:46:06.812919 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.813058 kubelet[1906]: E0113 20:46:06.812934 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.813184 kubelet[1906]: E0113 20:46:06.813171 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.813209 kubelet[1906]: W0113 20:46:06.813183 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.813209 kubelet[1906]: E0113 20:46:06.813196 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.813416 kubelet[1906]: E0113 20:46:06.813392 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.813416 kubelet[1906]: W0113 20:46:06.813406 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.813416 kubelet[1906]: E0113 20:46:06.813416 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.813758 kubelet[1906]: E0113 20:46:06.813735 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.813758 kubelet[1906]: W0113 20:46:06.813746 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.813758 kubelet[1906]: E0113 20:46:06.813756 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.813962 kubelet[1906]: E0113 20:46:06.813946 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.813962 kubelet[1906]: W0113 20:46:06.813958 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.814051 kubelet[1906]: E0113 20:46:06.813970 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.814207 kubelet[1906]: E0113 20:46:06.814195 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.814239 kubelet[1906]: W0113 20:46:06.814206 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.814239 kubelet[1906]: E0113 20:46:06.814218 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.814438 kubelet[1906]: E0113 20:46:06.814426 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.814438 kubelet[1906]: W0113 20:46:06.814437 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.814492 kubelet[1906]: E0113 20:46:06.814450 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.814650 kubelet[1906]: E0113 20:46:06.814638 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.814650 kubelet[1906]: W0113 20:46:06.814648 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.814693 kubelet[1906]: E0113 20:46:06.814658 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.814863 kubelet[1906]: E0113 20:46:06.814852 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.814863 kubelet[1906]: W0113 20:46:06.814862 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.814914 kubelet[1906]: E0113 20:46:06.814872 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.815152 kubelet[1906]: E0113 20:46:06.815139 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.815152 kubelet[1906]: W0113 20:46:06.815150 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.815198 kubelet[1906]: E0113 20:46:06.815160 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.815412 kubelet[1906]: E0113 20:46:06.815398 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.815412 kubelet[1906]: W0113 20:46:06.815408 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.815476 kubelet[1906]: E0113 20:46:06.815425 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.815669 kubelet[1906]: E0113 20:46:06.815657 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.815669 kubelet[1906]: W0113 20:46:06.815667 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.815709 kubelet[1906]: E0113 20:46:06.815682 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.815899 kubelet[1906]: E0113 20:46:06.815888 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.815899 kubelet[1906]: W0113 20:46:06.815897 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.815949 kubelet[1906]: E0113 20:46:06.815912 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.816130 kubelet[1906]: E0113 20:46:06.816117 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.816130 kubelet[1906]: W0113 20:46:06.816128 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.816180 kubelet[1906]: E0113 20:46:06.816146 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.816391 kubelet[1906]: E0113 20:46:06.816378 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.816391 kubelet[1906]: W0113 20:46:06.816389 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.816446 kubelet[1906]: E0113 20:46:06.816406 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.816818 kubelet[1906]: E0113 20:46:06.816772 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.816818 kubelet[1906]: W0113 20:46:06.816801 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.816897 kubelet[1906]: E0113 20:46:06.816824 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.817127 kubelet[1906]: E0113 20:46:06.817099 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.817127 kubelet[1906]: W0113 20:46:06.817113 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.817127 kubelet[1906]: E0113 20:46:06.817137 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.817395 kubelet[1906]: E0113 20:46:06.817384 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.817395 kubelet[1906]: W0113 20:46:06.817394 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.817446 kubelet[1906]: E0113 20:46:06.817412 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.817630 kubelet[1906]: E0113 20:46:06.817606 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.817630 kubelet[1906]: W0113 20:46:06.817619 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.817684 kubelet[1906]: E0113 20:46:06.817634 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:06.817910 kubelet[1906]: E0113 20:46:06.817890 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.817910 kubelet[1906]: W0113 20:46:06.817903 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.817973 kubelet[1906]: E0113 20:46:06.817920 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:06.818126 kubelet[1906]: E0113 20:46:06.818110 1906 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:06.818126 kubelet[1906]: W0113 20:46:06.818121 1906 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:06.818194 kubelet[1906]: E0113 20:46:06.818132 1906 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:07.268156 kubelet[1906]: E0113 20:46:07.268093 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:07.729174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876083913.mount: Deactivated successfully. 
Jan 13 20:46:07.797464 containerd[1572]: time="2025-01-13T20:46:07.797408674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:07.798198 containerd[1572]: time="2025-01-13T20:46:07.798156907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 20:46:07.799247 containerd[1572]: time="2025-01-13T20:46:07.799198861Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:07.801203 containerd[1572]: time="2025-01-13T20:46:07.801178664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:07.801738 containerd[1572]: time="2025-01-13T20:46:07.801702296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.467894533s" Jan 13 20:46:07.801738 containerd[1572]: time="2025-01-13T20:46:07.801728315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:46:07.803172 containerd[1572]: time="2025-01-13T20:46:07.803147125Z" level=info msg="CreateContainer within sandbox \"bfd46e49f0a6beff2cd3cb55ede4974317fe5a8d43f615fc945e0617dcc58133\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 
20:46:07.817931 containerd[1572]: time="2025-01-13T20:46:07.817887950Z" level=info msg="CreateContainer within sandbox \"bfd46e49f0a6beff2cd3cb55ede4974317fe5a8d43f615fc945e0617dcc58133\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fd74ab881f3edcd1a1e3bd3cacd6d1005dec26517d97050dbdffa760ba84360d\"" Jan 13 20:46:07.818359 containerd[1572]: time="2025-01-13T20:46:07.818332894Z" level=info msg="StartContainer for \"fd74ab881f3edcd1a1e3bd3cacd6d1005dec26517d97050dbdffa760ba84360d\"" Jan 13 20:46:07.875622 containerd[1572]: time="2025-01-13T20:46:07.875568050Z" level=info msg="StartContainer for \"fd74ab881f3edcd1a1e3bd3cacd6d1005dec26517d97050dbdffa760ba84360d\" returns successfully" Jan 13 20:46:08.268615 kubelet[1906]: E0113 20:46:08.268545 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:08.710607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd74ab881f3edcd1a1e3bd3cacd6d1005dec26517d97050dbdffa760ba84360d-rootfs.mount: Deactivated successfully. 
Jan 13 20:46:08.742225 kubelet[1906]: E0113 20:46:08.742121 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c" Jan 13 20:46:08.748950 containerd[1572]: time="2025-01-13T20:46:08.748870284Z" level=info msg="shim disconnected" id=fd74ab881f3edcd1a1e3bd3cacd6d1005dec26517d97050dbdffa760ba84360d namespace=k8s.io Jan 13 20:46:08.748950 containerd[1572]: time="2025-01-13T20:46:08.748921450Z" level=warning msg="cleaning up after shim disconnected" id=fd74ab881f3edcd1a1e3bd3cacd6d1005dec26517d97050dbdffa760ba84360d namespace=k8s.io Jan 13 20:46:08.748950 containerd[1572]: time="2025-01-13T20:46:08.748931749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:08.758343 kubelet[1906]: E0113 20:46:08.758298 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:08.912862 kubelet[1906]: I0113 20:46:08.911118 1906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kx9m6" podStartSLOduration=7.241786564 podStartE2EDuration="8.911053115s" podCreationTimestamp="2025-01-13 20:46:00 +0000 UTC" firstStartedPulling="2025-01-13 20:46:03.664238003 +0000 UTC m=+4.191137730" lastFinishedPulling="2025-01-13 20:46:05.333504534 +0000 UTC m=+5.860404281" observedRunningTime="2025-01-13 20:46:05.765364656 +0000 UTC m=+6.292264383" watchObservedRunningTime="2025-01-13 20:46:08.911053115 +0000 UTC m=+9.437952842" Jan 13 20:46:09.269351 kubelet[1906]: E0113 20:46:09.269278 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:09.761101 kubelet[1906]: E0113 
20:46:09.761062 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:09.761725 containerd[1572]: time="2025-01-13T20:46:09.761682047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 20:46:10.269769 kubelet[1906]: E0113 20:46:10.269707 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:10.742507 kubelet[1906]: E0113 20:46:10.742467 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:11.270514 kubelet[1906]: E0113 20:46:11.270459 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:12.271442 kubelet[1906]: E0113 20:46:12.271364 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:12.741993 kubelet[1906]: E0113 20:46:12.741944 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:13.272606 kubelet[1906]: E0113 20:46:13.272536 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:14.273042 kubelet[1906]: E0113 20:46:14.272976 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:14.742256 kubelet[1906]: E0113 20:46:14.742176 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:14.975246 containerd[1572]: time="2025-01-13T20:46:14.975157144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:14.988823 containerd[1572]: time="2025-01-13T20:46:14.988744485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 13 20:46:15.011683 containerd[1572]: time="2025-01-13T20:46:15.011520398Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:15.045225 containerd[1572]: time="2025-01-13T20:46:15.045138114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:15.045954 containerd[1572]: time="2025-01-13T20:46:15.045902717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.284177158s"
Jan 13 20:46:15.046047 containerd[1572]: time="2025-01-13T20:46:15.045952320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 13 20:46:15.047755 containerd[1572]: time="2025-01-13T20:46:15.047724073Z" level=info msg="CreateContainer within sandbox \"bfd46e49f0a6beff2cd3cb55ede4974317fe5a8d43f615fc945e0617dcc58133\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:46:15.260389 containerd[1572]: time="2025-01-13T20:46:15.260244525Z" level=info msg="CreateContainer within sandbox \"bfd46e49f0a6beff2cd3cb55ede4974317fe5a8d43f615fc945e0617dcc58133\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26a7471cf16f4a306f5ba2638f9263e8723db70cefdf6d7ecbbdb9c71c56ce1b\""
Jan 13 20:46:15.261104 containerd[1572]: time="2025-01-13T20:46:15.261061878Z" level=info msg="StartContainer for \"26a7471cf16f4a306f5ba2638f9263e8723db70cefdf6d7ecbbdb9c71c56ce1b\""
Jan 13 20:46:15.274061 kubelet[1906]: E0113 20:46:15.273884 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:15.518948 containerd[1572]: time="2025-01-13T20:46:15.518877344Z" level=info msg="StartContainer for \"26a7471cf16f4a306f5ba2638f9263e8723db70cefdf6d7ecbbdb9c71c56ce1b\" returns successfully"
Jan 13 20:46:15.771404 kubelet[1906]: E0113 20:46:15.771368 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:16.274604 kubelet[1906]: E0113 20:46:16.274542 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:16.742322 kubelet[1906]: E0113 20:46:16.742224 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:16.772400 kubelet[1906]: E0113 20:46:16.772366 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:17.275417 kubelet[1906]: E0113 20:46:17.275364 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:17.690267 containerd[1572]: time="2025-01-13T20:46:17.690034428Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:46:17.712295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26a7471cf16f4a306f5ba2638f9263e8723db70cefdf6d7ecbbdb9c71c56ce1b-rootfs.mount: Deactivated successfully.
Jan 13 20:46:17.716818 kubelet[1906]: I0113 20:46:17.716552 1906 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:46:18.276262 kubelet[1906]: E0113 20:46:18.276202 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:18.745857 containerd[1572]: time="2025-01-13T20:46:18.745811537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:0,}"
Jan 13 20:46:18.937234 containerd[1572]: time="2025-01-13T20:46:18.937153782Z" level=info msg="shim disconnected" id=26a7471cf16f4a306f5ba2638f9263e8723db70cefdf6d7ecbbdb9c71c56ce1b namespace=k8s.io
Jan 13 20:46:18.937234 containerd[1572]: time="2025-01-13T20:46:18.937204066Z" level=warning msg="cleaning up after shim disconnected" id=26a7471cf16f4a306f5ba2638f9263e8723db70cefdf6d7ecbbdb9c71c56ce1b namespace=k8s.io
Jan 13 20:46:18.937234 containerd[1572]: time="2025-01-13T20:46:18.937211911Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:46:19.276555 kubelet[1906]: E0113 20:46:19.276378 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:19.569485 kubelet[1906]: I0113 20:46:19.569410 1906 topology_manager.go:215] "Topology Admit Handler" podUID="671fc0ad-ef20-41af-9baf-b6763131170f" podNamespace="default" podName="nginx-deployment-6d5f899847-l84xv"
Jan 13 20:46:19.663102 containerd[1572]: time="2025-01-13T20:46:19.663052401Z" level=error msg="Failed to destroy network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:19.663496 containerd[1572]: time="2025-01-13T20:46:19.663463913Z" level=error msg="encountered an error cleaning up failed sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:19.663539 containerd[1572]: time="2025-01-13T20:46:19.663525088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:19.663758 kubelet[1906]: E0113 20:46:19.663724 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:19.663887 kubelet[1906]: E0113 20:46:19.663779 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:19.663887 kubelet[1906]: E0113 20:46:19.663798 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:19.663887 kubelet[1906]: E0113 20:46:19.663846 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:19.665103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9-shm.mount: Deactivated successfully.
Jan 13 20:46:19.697037 kubelet[1906]: I0113 20:46:19.696988 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwg4\" (UniqueName: \"kubernetes.io/projected/671fc0ad-ef20-41af-9baf-b6763131170f-kube-api-access-vjwg4\") pod \"nginx-deployment-6d5f899847-l84xv\" (UID: \"671fc0ad-ef20-41af-9baf-b6763131170f\") " pod="default/nginx-deployment-6d5f899847-l84xv"
Jan 13 20:46:19.778491 kubelet[1906]: I0113 20:46:19.778458 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9"
Jan 13 20:46:19.779132 containerd[1572]: time="2025-01-13T20:46:19.779070667Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\""
Jan 13 20:46:19.779611 containerd[1572]: time="2025-01-13T20:46:19.779274058Z" level=info msg="Ensure that sandbox cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9 in task-service has been cleanup successfully"
Jan 13 20:46:19.779611 containerd[1572]: time="2025-01-13T20:46:19.779472040Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully"
Jan 13 20:46:19.779611 containerd[1572]: time="2025-01-13T20:46:19.779485665Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully"
Jan 13 20:46:19.780246 containerd[1572]: time="2025-01-13T20:46:19.780206026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:1,}"
Jan 13 20:46:19.781332 systemd[1]: run-netns-cni\x2d1497e63d\x2dedf0\x2d8f67\x2d362d\x2d5b2b69dd77d6.mount: Deactivated successfully.
Jan 13 20:46:19.781489 kubelet[1906]: E0113 20:46:19.781468 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:19.782003 containerd[1572]: time="2025-01-13T20:46:19.781978260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:46:20.174114 containerd[1572]: time="2025-01-13T20:46:20.174072967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:0,}"
Jan 13 20:46:20.263144 kubelet[1906]: E0113 20:46:20.263088 1906 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:20.277293 kubelet[1906]: E0113 20:46:20.277227 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:21.097608 containerd[1572]: time="2025-01-13T20:46:21.097549566Z" level=error msg="Failed to destroy network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.098095 containerd[1572]: time="2025-01-13T20:46:21.097992261Z" level=error msg="encountered an error cleaning up failed sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.098095 containerd[1572]: time="2025-01-13T20:46:21.098063359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.098313 kubelet[1906]: E0113 20:46:21.098282 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.098375 kubelet[1906]: E0113 20:46:21.098338 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:21.098375 kubelet[1906]: E0113 20:46:21.098358 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:21.098436 kubelet[1906]: E0113 20:46:21.098412 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:21.099579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081-shm.mount: Deactivated successfully.
Jan 13 20:46:21.277728 kubelet[1906]: E0113 20:46:21.277631 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:21.510692 containerd[1572]: time="2025-01-13T20:46:21.510526826Z" level=error msg="Failed to destroy network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.511123 containerd[1572]: time="2025-01-13T20:46:21.511069715Z" level=error msg="encountered an error cleaning up failed sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.511273 containerd[1572]: time="2025-01-13T20:46:21.511143287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.511472 kubelet[1906]: E0113 20:46:21.511415 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:21.511523 kubelet[1906]: E0113 20:46:21.511488 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv"
Jan 13 20:46:21.511523 kubelet[1906]: E0113 20:46:21.511514 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv"
Jan 13 20:46:21.511600 kubelet[1906]: E0113 20:46:21.511583 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-l84xv" podUID="671fc0ad-ef20-41af-9baf-b6763131170f"
Jan 13 20:46:21.784769 kubelet[1906]: I0113 20:46:21.784735 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081"
Jan 13 20:46:21.785295 containerd[1572]: time="2025-01-13T20:46:21.785186043Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\""
Jan 13 20:46:21.785451 containerd[1572]: time="2025-01-13T20:46:21.785370058Z" level=info msg="Ensure that sandbox be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081 in task-service has been cleanup successfully"
Jan 13 20:46:21.785580 containerd[1572]: time="2025-01-13T20:46:21.785563752Z" level=info msg="TearDown network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" successfully"
Jan 13 20:46:21.785580 containerd[1572]: time="2025-01-13T20:46:21.785578200Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" returns successfully"
Jan 13 20:46:21.785653 kubelet[1906]: I0113 20:46:21.785634 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448"
Jan 13 20:46:21.786138 containerd[1572]: time="2025-01-13T20:46:21.786099718Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\""
Jan 13 20:46:21.786193 containerd[1572]: time="2025-01-13T20:46:21.786175735Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully"
Jan 13 20:46:21.786193 containerd[1572]: time="2025-01-13T20:46:21.786184451Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully"
Jan 13 20:46:21.786257 containerd[1572]: time="2025-01-13T20:46:21.786249417Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\""
Jan 13 20:46:21.786395 containerd[1572]: time="2025-01-13T20:46:21.786369760Z" level=info msg="Ensure that sandbox 2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448 in task-service has been cleanup successfully"
Jan 13 20:46:21.786569 containerd[1572]: time="2025-01-13T20:46:21.786537052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:2,}"
Jan 13 20:46:21.786747 containerd[1572]: time="2025-01-13T20:46:21.786728953Z" level=info msg="TearDown network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" successfully"
Jan 13 20:46:21.786747 containerd[1572]: time="2025-01-13T20:46:21.786744193Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" returns successfully"
Jan 13 20:46:21.787135 containerd[1572]: time="2025-01-13T20:46:21.787113096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:1,}"
Jan 13 20:46:21.901037 systemd[1]: run-netns-cni\x2dbc0456eb\x2d272d\x2de3a8\x2d8b23\x2da67621fd106f.mount: Deactivated successfully.
Jan 13 20:46:21.901238 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448-shm.mount: Deactivated successfully.
Jan 13 20:46:21.901397 systemd[1]: run-netns-cni\x2d4d16877c\x2d0d49\x2d37e3\x2d50c2\x2deb5172e57be2.mount: Deactivated successfully.
Jan 13 20:46:22.278600 kubelet[1906]: E0113 20:46:22.278405 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:22.622528 containerd[1572]: time="2025-01-13T20:46:22.622469619Z" level=error msg="Failed to destroy network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.623007 containerd[1572]: time="2025-01-13T20:46:22.622880352Z" level=error msg="encountered an error cleaning up failed sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.623007 containerd[1572]: time="2025-01-13T20:46:22.622940638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.623254 kubelet[1906]: E0113 20:46:22.623208 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.623254 kubelet[1906]: E0113 20:46:22.623269 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:22.623415 kubelet[1906]: E0113 20:46:22.623290 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv"
Jan 13 20:46:22.623415 kubelet[1906]: E0113 20:46:22.623344 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c"
Jan 13 20:46:22.673233 containerd[1572]: time="2025-01-13T20:46:22.673104153Z" level=error msg="Failed to destroy network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.673552 containerd[1572]: time="2025-01-13T20:46:22.673518252Z" level=error msg="encountered an error cleaning up failed sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.673594 containerd[1572]: time="2025-01-13T20:46:22.673574540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.673833 kubelet[1906]: E0113 20:46:22.673805 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:22.673896 kubelet[1906]: E0113 20:46:22.673864 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv"
Jan 13 20:46:22.673896 kubelet[1906]: E0113 20:46:22.673885 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv"
Jan 13 20:46:22.673994 kubelet[1906]: E0113 20:46:22.673978 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-l84xv" podUID="671fc0ad-ef20-41af-9baf-b6763131170f"
Jan 13 20:46:22.787639 kubelet[1906]: I0113 20:46:22.787599 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c"
Jan 13 20:46:22.788109 containerd[1572]: time="2025-01-13T20:46:22.788076456Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\""
Jan 13 20:46:22.788296 containerd[1572]: time="2025-01-13T20:46:22.788272063Z" level=info msg="Ensure that sandbox 84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c in task-service has been cleanup successfully"
Jan 13 20:46:22.788505 containerd[1572]: time="2025-01-13T20:46:22.788445317Z" level=info msg="TearDown network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" successfully"
Jan 13 20:46:22.788505 containerd[1572]: time="2025-01-13T20:46:22.788460105Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" returns successfully"
Jan 13 20:46:22.788841 containerd[1572]: time="2025-01-13T20:46:22.788786444Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\""
Jan 13 20:46:22.788930 containerd[1572]: time="2025-01-13T20:46:22.788912057Z" level=info msg="TearDown network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" successfully"
Jan 13 20:46:22.788930 containerd[1572]: time="2025-01-13T20:46:22.788927938Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" returns successfully"
Jan 13 20:46:22.789176 kubelet[1906]: I0113 20:46:22.789158 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea"
Jan 13 20:46:22.789617 containerd[1572]: time="2025-01-13T20:46:22.789389779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:2,}"
Jan 13 20:46:22.789617 containerd[1572]: time="2025-01-13T20:46:22.789483188Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\""
Jan 13 20:46:22.789689 containerd[1572]: time="2025-01-13T20:46:22.789620193Z" level=info msg="Ensure that sandbox 5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea in task-service has been cleanup successfully"
Jan 13 20:46:22.789766 containerd[1572]: time="2025-01-13T20:46:22.789748470Z" level=info msg="TearDown network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" successfully"
Jan 13 20:46:22.789766 containerd[1572]: time="2025-01-13T20:46:22.789762767Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" returns successfully"
Jan 13 20:46:22.790155 containerd[1572]: time="2025-01-13T20:46:22.789965719Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\""
Jan 13 20:46:22.790155 containerd[1572]: time="2025-01-13T20:46:22.790080119Z" level=info msg="TearDown network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" successfully"
Jan 13 20:46:22.790155 containerd[1572]: time="2025-01-13T20:46:22.790095449Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" returns successfully"
Jan 13 20:46:22.790293 containerd[1572]: time="2025-01-13T20:46:22.790274274Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\""
Jan 13 20:46:22.790384 containerd[1572]: time="2025-01-13T20:46:22.790346022Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully"
Jan 13 20:46:22.790384 containerd[1572]: time="2025-01-13T20:46:22.790380759Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully"
Jan 13 20:46:22.790709 containerd[1572]: time="2025-01-13T20:46:22.790684565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:3,}"
Jan 13 20:46:22.901197 systemd[1]: run-netns-cni\x2d339f2a57\x2d401a\x2d397d\x2d042a\x2d2f46a9476cdb.mount: Deactivated successfully.
Jan 13 20:46:22.901377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c-shm.mount: Deactivated successfully.
Jan 13 20:46:22.901522 systemd[1]: run-netns-cni\x2d38af1e8a\x2d7b97\x2d43bc\x2d04c7\x2d32b5ff664de6.mount: Deactivated successfully.
Jan 13 20:46:22.901751 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea-shm.mount: Deactivated successfully.
Jan 13 20:46:23.278757 kubelet[1906]: E0113 20:46:23.278611 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:46:23.342110 containerd[1572]: time="2025-01-13T20:46:23.338777846Z" level=error msg="Failed to destroy network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:23.342110 containerd[1572]: time="2025-01-13T20:46:23.339497311Z" level=error msg="encountered an error cleaning up failed sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:23.342110 containerd[1572]: time="2025-01-13T20:46:23.339562146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check
that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:23.342342 kubelet[1906]: E0113 20:46:23.339877 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:23.342342 kubelet[1906]: E0113 20:46:23.339944 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv" Jan 13 20:46:23.342342 kubelet[1906]: E0113 20:46:23.339965 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv" Jan 13 20:46:23.342427 kubelet[1906]: E0113 20:46:23.340028 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-l84xv" podUID="671fc0ad-ef20-41af-9baf-b6763131170f" Jan 13 20:46:23.392954 containerd[1572]: time="2025-01-13T20:46:23.392857043Z" level=error msg="Failed to destroy network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:23.393756 containerd[1572]: time="2025-01-13T20:46:23.393346455Z" level=error msg="encountered an error cleaning up failed sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:23.393756 containerd[1572]: time="2025-01-13T20:46:23.393417602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:23.393975 kubelet[1906]: E0113 20:46:23.393686 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:23.393975 kubelet[1906]: E0113 20:46:23.393753 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:23.393975 kubelet[1906]: E0113 20:46:23.393779 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:23.394170 kubelet[1906]: E0113 20:46:23.393835 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ccjv" 
podUID="a7de8883-b1b1-470b-8914-6f939b888e9c" Jan 13 20:46:23.791767 kubelet[1906]: I0113 20:46:23.791733 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7" Jan 13 20:46:23.792349 containerd[1572]: time="2025-01-13T20:46:23.792317964Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\"" Jan 13 20:46:23.792807 containerd[1572]: time="2025-01-13T20:46:23.792535613Z" level=info msg="Ensure that sandbox e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7 in task-service has been cleanup successfully" Jan 13 20:46:23.792877 containerd[1572]: time="2025-01-13T20:46:23.792837364Z" level=info msg="TearDown network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" successfully" Jan 13 20:46:23.792877 containerd[1572]: time="2025-01-13T20:46:23.792855168Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" returns successfully" Jan 13 20:46:23.793069 containerd[1572]: time="2025-01-13T20:46:23.793046997Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\"" Jan 13 20:46:23.793137 containerd[1572]: time="2025-01-13T20:46:23.793122914Z" level=info msg="TearDown network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" successfully" Jan 13 20:46:23.793137 containerd[1572]: time="2025-01-13T20:46:23.793135117Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" returns successfully" Jan 13 20:46:23.793353 containerd[1572]: time="2025-01-13T20:46:23.793317959Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\"" Jan 13 20:46:23.793422 containerd[1572]: time="2025-01-13T20:46:23.793406359Z" level=info msg="TearDown network for sandbox 
\"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" successfully" Jan 13 20:46:23.793460 containerd[1572]: time="2025-01-13T20:46:23.793420537Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" returns successfully" Jan 13 20:46:23.793616 kubelet[1906]: I0113 20:46:23.793594 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71" Jan 13 20:46:23.793710 containerd[1572]: time="2025-01-13T20:46:23.793689795Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\"" Jan 13 20:46:23.793789 containerd[1572]: time="2025-01-13T20:46:23.793774579Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully" Jan 13 20:46:23.793789 containerd[1572]: time="2025-01-13T20:46:23.793787613Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully" Jan 13 20:46:23.793954 containerd[1572]: time="2025-01-13T20:46:23.793922323Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\"" Jan 13 20:46:23.794108 containerd[1572]: time="2025-01-13T20:46:23.794089274Z" level=info msg="Ensure that sandbox 8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71 in task-service has been cleanup successfully" Jan 13 20:46:23.794193 containerd[1572]: time="2025-01-13T20:46:23.794170300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:4,}" Jan 13 20:46:23.794330 containerd[1572]: time="2025-01-13T20:46:23.794306051Z" level=info msg="TearDown network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" successfully" Jan 13 20:46:23.794330 containerd[1572]: 
time="2025-01-13T20:46:23.794327953Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" returns successfully" Jan 13 20:46:23.794602 containerd[1572]: time="2025-01-13T20:46:23.794576832Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\"" Jan 13 20:46:23.794708 containerd[1572]: time="2025-01-13T20:46:23.794687996Z" level=info msg="TearDown network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" successfully" Jan 13 20:46:23.794758 containerd[1572]: time="2025-01-13T20:46:23.794707614Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" returns successfully" Jan 13 20:46:23.794966 containerd[1572]: time="2025-01-13T20:46:23.794946975Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\"" Jan 13 20:46:23.795049 containerd[1572]: time="2025-01-13T20:46:23.795034664Z" level=info msg="TearDown network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" successfully" Jan 13 20:46:23.795049 containerd[1572]: time="2025-01-13T20:46:23.795046577Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" returns successfully" Jan 13 20:46:23.795295 containerd[1572]: time="2025-01-13T20:46:23.795278262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:3,}" Jan 13 20:46:23.901112 systemd[1]: run-netns-cni\x2ddd82848d\x2db597\x2d7174\x2d36f6\x2d69ac13b90e69.mount: Deactivated successfully. Jan 13 20:46:23.901329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7-shm.mount: Deactivated successfully. 
Jan 13 20:46:23.901529 systemd[1]: run-netns-cni\x2d1d50820a\x2d6115\x2d1de4\x2da34a\x2d865e2bb06e8f.mount: Deactivated successfully. Jan 13 20:46:23.901714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71-shm.mount: Deactivated successfully. Jan 13 20:46:24.279288 kubelet[1906]: E0113 20:46:24.279166 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:25.198756 containerd[1572]: time="2025-01-13T20:46:25.198691441Z" level=error msg="Failed to destroy network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.199270 containerd[1572]: time="2025-01-13T20:46:25.199179648Z" level=error msg="encountered an error cleaning up failed sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.199303 containerd[1572]: time="2025-01-13T20:46:25.199264591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.199725 kubelet[1906]: E0113 20:46:25.199598 1906 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.199725 kubelet[1906]: E0113 20:46:25.199678 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:25.199725 kubelet[1906]: E0113 20:46:25.199700 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:25.200122 kubelet[1906]: E0113 20:46:25.200056 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c" Jan 13 20:46:25.201132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1-shm.mount: Deactivated successfully. Jan 13 20:46:25.280219 kubelet[1906]: E0113 20:46:25.280175 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:25.307591 containerd[1572]: time="2025-01-13T20:46:25.307530221Z" level=error msg="Failed to destroy network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.308172 containerd[1572]: time="2025-01-13T20:46:25.308138438Z" level=error msg="encountered an error cleaning up failed sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.308211 containerd[1572]: time="2025-01-13T20:46:25.308193483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.308496 kubelet[1906]: E0113 20:46:25.308432 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:25.308496 kubelet[1906]: E0113 20:46:25.308493 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv" Jan 13 20:46:25.308496 kubelet[1906]: E0113 20:46:25.308513 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv" Jan 13 20:46:25.308679 kubelet[1906]: E0113 20:46:25.308571 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-l84xv" podUID="671fc0ad-ef20-41af-9baf-b6763131170f" Jan 13 20:46:25.799335 kubelet[1906]: I0113 20:46:25.799193 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1" Jan 13 20:46:25.800122 containerd[1572]: time="2025-01-13T20:46:25.799969707Z" level=info msg="StopPodSandbox for \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\"" Jan 13 20:46:25.800362 containerd[1572]: time="2025-01-13T20:46:25.800337913Z" level=info msg="Ensure that sandbox 26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1 in task-service has been cleanup successfully" Jan 13 20:46:25.801360 containerd[1572]: time="2025-01-13T20:46:25.800626578Z" level=info msg="TearDown network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" successfully" Jan 13 20:46:25.801360 containerd[1572]: time="2025-01-13T20:46:25.800644542Z" level=info msg="StopPodSandbox for \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" returns successfully" Jan 13 20:46:25.801546 containerd[1572]: time="2025-01-13T20:46:25.801503560Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\"" Jan 13 20:46:25.801732 containerd[1572]: time="2025-01-13T20:46:25.801671834Z" level=info msg="TearDown network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" successfully" Jan 13 20:46:25.801732 containerd[1572]: time="2025-01-13T20:46:25.801727901Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" returns successfully" Jan 13 20:46:25.802209 containerd[1572]: time="2025-01-13T20:46:25.802181361Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\"" Jan 13 20:46:25.802307 containerd[1572]: 
time="2025-01-13T20:46:25.802259180Z" level=info msg="TearDown network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" successfully" Jan 13 20:46:25.802307 containerd[1572]: time="2025-01-13T20:46:25.802272146Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" returns successfully" Jan 13 20:46:25.802753 kubelet[1906]: I0113 20:46:25.802722 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7" Jan 13 20:46:25.803395 containerd[1572]: time="2025-01-13T20:46:25.803161062Z" level=info msg="StopPodSandbox for \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\"" Jan 13 20:46:25.803395 containerd[1572]: time="2025-01-13T20:46:25.803295029Z" level=info msg="Ensure that sandbox f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7 in task-service has been cleanup successfully" Jan 13 20:46:25.803395 containerd[1572]: time="2025-01-13T20:46:25.803317682Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\"" Jan 13 20:46:25.803521 containerd[1572]: time="2025-01-13T20:46:25.803421220Z" level=info msg="TearDown network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" successfully" Jan 13 20:46:25.803521 containerd[1572]: time="2025-01-13T20:46:25.803449726Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" returns successfully" Jan 13 20:46:25.803774 containerd[1572]: time="2025-01-13T20:46:25.803738259Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\"" Jan 13 20:46:25.803864 containerd[1572]: time="2025-01-13T20:46:25.803831408Z" level=info msg="TearDown network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" successfully" Jan 13 
20:46:25.803864 containerd[1572]: time="2025-01-13T20:46:25.803860614Z" level=info msg="StopPodSandbox for \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" returns successfully" Jan 13 20:46:25.804056 containerd[1572]: time="2025-01-13T20:46:25.803840796Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully" Jan 13 20:46:25.804056 containerd[1572]: time="2025-01-13T20:46:25.803906442Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully" Jan 13 20:46:25.804431 containerd[1572]: time="2025-01-13T20:46:25.804307461Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\"" Jan 13 20:46:25.804431 containerd[1572]: time="2025-01-13T20:46:25.804383156Z" level=info msg="TearDown network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" successfully" Jan 13 20:46:25.804431 containerd[1572]: time="2025-01-13T20:46:25.804392525Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" returns successfully" Jan 13 20:46:25.804512 containerd[1572]: time="2025-01-13T20:46:25.804482848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:5,}" Jan 13 20:46:25.804923 containerd[1572]: time="2025-01-13T20:46:25.804895200Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\"" Jan 13 20:46:25.805333 containerd[1572]: time="2025-01-13T20:46:25.805023635Z" level=info msg="TearDown network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" successfully" Jan 13 20:46:25.805333 containerd[1572]: time="2025-01-13T20:46:25.805038965Z" level=info msg="StopPodSandbox for 
\"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" returns successfully" Jan 13 20:46:25.805541 containerd[1572]: time="2025-01-13T20:46:25.805522463Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\"" Jan 13 20:46:25.805611 containerd[1572]: time="2025-01-13T20:46:25.805595514Z" level=info msg="TearDown network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" successfully" Jan 13 20:46:25.805611 containerd[1572]: time="2025-01-13T20:46:25.805608428Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" returns successfully" Jan 13 20:46:25.806306 containerd[1572]: time="2025-01-13T20:46:25.806194062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:4,}" Jan 13 20:46:26.094484 systemd[1]: run-netns-cni\x2d5676ee93\x2ddc56\x2dd4c9\x2deb0f\x2d20a5599b2e8b.mount: Deactivated successfully. Jan 13 20:46:26.094681 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7-shm.mount: Deactivated successfully. Jan 13 20:46:26.094823 systemd[1]: run-netns-cni\x2d8fb63e4d\x2dfe52\x2d2edb\x2df442\x2d4401740f4c82.mount: Deactivated successfully. Jan 13 20:46:26.281939 kubelet[1906]: E0113 20:46:26.281878 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:26.479917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879349032.mount: Deactivated successfully. 
Jan 13 20:46:27.282088 kubelet[1906]: E0113 20:46:27.281987 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:27.429203 containerd[1572]: time="2025-01-13T20:46:27.429130138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.580453 containerd[1572]: time="2025-01-13T20:46:27.580365590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:46:27.652360 containerd[1572]: time="2025-01-13T20:46:27.652285389Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.722892 containerd[1572]: time="2025-01-13T20:46:27.722814246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.723597 containerd[1572]: time="2025-01-13T20:46:27.723175578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.941170217s" Jan 13 20:46:27.723597 containerd[1572]: time="2025-01-13T20:46:27.723203411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:46:27.730658 containerd[1572]: time="2025-01-13T20:46:27.730614936Z" level=info msg="CreateContainer within sandbox 
\"bfd46e49f0a6beff2cd3cb55ede4974317fe5a8d43f615fc945e0617dcc58133\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:46:27.874403 containerd[1572]: time="2025-01-13T20:46:27.874246644Z" level=error msg="Failed to destroy network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.875164 containerd[1572]: time="2025-01-13T20:46:27.874996359Z" level=error msg="encountered an error cleaning up failed sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.875164 containerd[1572]: time="2025-01-13T20:46:27.875068297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.875366 kubelet[1906]: E0113 20:46:27.875306 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.875366 
kubelet[1906]: E0113 20:46:27.875359 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv" Jan 13 20:46:27.875437 kubelet[1906]: E0113 20:46:27.875383 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-l84xv" Jan 13 20:46:27.875437 kubelet[1906]: E0113 20:46:27.875435 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-l84xv_default(671fc0ad-ef20-41af-9baf-b6763131170f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-l84xv" podUID="671fc0ad-ef20-41af-9baf-b6763131170f" Jan 13 20:46:27.886913 containerd[1572]: time="2025-01-13T20:46:27.886844173Z" level=error msg="Failed to destroy network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.887408 containerd[1572]: time="2025-01-13T20:46:27.887370180Z" level=error msg="encountered an error cleaning up failed sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.887481 containerd[1572]: time="2025-01-13T20:46:27.887443780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.887708 kubelet[1906]: E0113 20:46:27.887679 1906 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:27.887761 kubelet[1906]: E0113 20:46:27.887742 1906 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:27.887798 kubelet[1906]: E0113 20:46:27.887762 1906 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ccjv" Jan 13 20:46:27.887827 kubelet[1906]: E0113 20:46:27.887813 1906 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7ccjv_calico-system(a7de8883-b1b1-470b-8914-6f939b888e9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ccjv" podUID="a7de8883-b1b1-470b-8914-6f939b888e9c" Jan 13 20:46:28.147423 containerd[1572]: time="2025-01-13T20:46:28.147262949Z" level=info msg="CreateContainer within sandbox \"bfd46e49f0a6beff2cd3cb55ede4974317fe5a8d43f615fc945e0617dcc58133\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a4662fc331f4ca2f138f64b7b35a8ea6dab78257fa2c324162c9182c85ddef9\"" Jan 13 20:46:28.148089 containerd[1572]: time="2025-01-13T20:46:28.147758677Z" level=info msg="StartContainer for \"2a4662fc331f4ca2f138f64b7b35a8ea6dab78257fa2c324162c9182c85ddef9\"" Jan 13 20:46:28.239105 containerd[1572]: time="2025-01-13T20:46:28.239025133Z" 
level=info msg="StartContainer for \"2a4662fc331f4ca2f138f64b7b35a8ea6dab78257fa2c324162c9182c85ddef9\" returns successfully" Jan 13 20:46:28.279379 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:46:28.279503 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 20:46:28.282697 kubelet[1906]: E0113 20:46:28.282634 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:28.713462 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf-shm.mount: Deactivated successfully. Jan 13 20:46:28.713659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8-shm.mount: Deactivated successfully. Jan 13 20:46:28.811203 kubelet[1906]: E0113 20:46:28.810807 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:28.811934 kubelet[1906]: I0113 20:46:28.811908 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf" Jan 13 20:46:28.812705 containerd[1572]: time="2025-01-13T20:46:28.812657420Z" level=info msg="StopPodSandbox for \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\"" Jan 13 20:46:28.813109 containerd[1572]: time="2025-01-13T20:46:28.812853555Z" level=info msg="Ensure that sandbox 4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf in task-service has been cleanup successfully" Jan 13 20:46:28.813682 containerd[1572]: time="2025-01-13T20:46:28.813649766Z" level=info msg="TearDown network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\" successfully" Jan 13 20:46:28.813682 
containerd[1572]: time="2025-01-13T20:46:28.813677399Z" level=info msg="StopPodSandbox for \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\" returns successfully" Jan 13 20:46:28.814106 containerd[1572]: time="2025-01-13T20:46:28.814073867Z" level=info msg="StopPodSandbox for \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\"" Jan 13 20:46:28.814207 containerd[1572]: time="2025-01-13T20:46:28.814177496Z" level=info msg="TearDown network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" successfully" Jan 13 20:46:28.814207 containerd[1572]: time="2025-01-13T20:46:28.814202673Z" level=info msg="StopPodSandbox for \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" returns successfully" Jan 13 20:46:28.815607 systemd[1]: run-netns-cni\x2dfeaf40d5\x2d69a2\x2d37b9\x2dc7da\x2dd862580391a5.mount: Deactivated successfully. Jan 13 20:46:28.816158 containerd[1572]: time="2025-01-13T20:46:28.816128895Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\"" Jan 13 20:46:28.816273 containerd[1572]: time="2025-01-13T20:46:28.816243544Z" level=info msg="TearDown network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" successfully" Jan 13 20:46:28.816273 containerd[1572]: time="2025-01-13T20:46:28.816269544Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" returns successfully" Jan 13 20:46:28.817348 kubelet[1906]: I0113 20:46:28.817315 1906 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8" Jan 13 20:46:28.817948 containerd[1572]: time="2025-01-13T20:46:28.817799899Z" level=info msg="StopPodSandbox for \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\"" Jan 13 20:46:28.818213 containerd[1572]: time="2025-01-13T20:46:28.818180917Z" level=info 
msg="Ensure that sandbox 25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8 in task-service has been cleanup successfully" Jan 13 20:46:28.819662 containerd[1572]: time="2025-01-13T20:46:28.819502494Z" level=info msg="TearDown network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\" successfully" Jan 13 20:46:28.819662 containerd[1572]: time="2025-01-13T20:46:28.819580222Z" level=info msg="StopPodSandbox for \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\" returns successfully" Jan 13 20:46:28.820199 containerd[1572]: time="2025-01-13T20:46:28.820089986Z" level=info msg="StopPodSandbox for \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\"" Jan 13 20:46:28.820360 containerd[1572]: time="2025-01-13T20:46:28.820325235Z" level=info msg="TearDown network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" successfully" Jan 13 20:46:28.820413 containerd[1572]: time="2025-01-13T20:46:28.820365994Z" level=info msg="StopPodSandbox for \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" returns successfully" Jan 13 20:46:28.820634 containerd[1572]: time="2025-01-13T20:46:28.820612074Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\"" Jan 13 20:46:28.820733 containerd[1572]: time="2025-01-13T20:46:28.820715822Z" level=info msg="TearDown network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" successfully" Jan 13 20:46:28.820778 containerd[1572]: time="2025-01-13T20:46:28.820733566Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" returns successfully" Jan 13 20:46:28.822395 systemd[1]: run-netns-cni\x2d93afc09a\x2d88dd\x2d9015\x2d1184\x2d1e7780090353.mount: Deactivated successfully. 
Jan 13 20:46:28.823000 containerd[1572]: time="2025-01-13T20:46:28.822959931Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\"" Jan 13 20:46:28.823773 containerd[1572]: time="2025-01-13T20:46:28.823145616Z" level=info msg="TearDown network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" successfully" Jan 13 20:46:28.823773 containerd[1572]: time="2025-01-13T20:46:28.823160295Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" returns successfully" Jan 13 20:46:28.823773 containerd[1572]: time="2025-01-13T20:46:28.823284732Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\"" Jan 13 20:46:28.823773 containerd[1572]: time="2025-01-13T20:46:28.823370697Z" level=info msg="TearDown network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" successfully" Jan 13 20:46:28.823773 containerd[1572]: time="2025-01-13T20:46:28.823379734Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" returns successfully" Jan 13 20:46:28.824810 containerd[1572]: time="2025-01-13T20:46:28.824765824Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\"" Jan 13 20:46:28.824968 containerd[1572]: time="2025-01-13T20:46:28.824882877Z" level=info msg="TearDown network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" successfully" Jan 13 20:46:28.824968 containerd[1572]: time="2025-01-13T20:46:28.824904589Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" returns successfully" Jan 13 20:46:28.824968 containerd[1572]: time="2025-01-13T20:46:28.824906823Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\"" Jan 13 20:46:28.825274 
containerd[1572]: time="2025-01-13T20:46:28.825258114Z" level=info msg="TearDown network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" successfully" Jan 13 20:46:28.825352 containerd[1572]: time="2025-01-13T20:46:28.825338367Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" returns successfully" Jan 13 20:46:28.825467 containerd[1572]: time="2025-01-13T20:46:28.825439321Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\"" Jan 13 20:46:28.825633 containerd[1572]: time="2025-01-13T20:46:28.825618703Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully" Jan 13 20:46:28.825809 containerd[1572]: time="2025-01-13T20:46:28.825794019Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully" Jan 13 20:46:28.826136 containerd[1572]: time="2025-01-13T20:46:28.826117947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:5,}" Jan 13 20:46:28.826686 containerd[1572]: time="2025-01-13T20:46:28.826638443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:6,}" Jan 13 20:46:28.915440 kubelet[1906]: I0113 20:46:28.915406 1906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-jvjfl" podStartSLOduration=4.856057599 podStartE2EDuration="28.915370534s" podCreationTimestamp="2025-01-13 20:46:00 +0000 UTC" firstStartedPulling="2025-01-13 20:46:03.664263611 +0000 UTC m=+4.191163338" lastFinishedPulling="2025-01-13 20:46:27.723576546 +0000 UTC m=+28.250476273" observedRunningTime="2025-01-13 20:46:28.915119463 +0000 UTC m=+29.442019190" 
watchObservedRunningTime="2025-01-13 20:46:28.915370534 +0000 UTC m=+29.442270261" Jan 13 20:46:29.283442 kubelet[1906]: E0113 20:46:29.283388 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:29.571745 systemd-networkd[1246]: cali311c2bface4: Link UP Jan 13 20:46:29.572582 systemd-networkd[1246]: cali311c2bface4: Gained carrier Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.298 [INFO][2968] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.339 [INFO][2968] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0 nginx-deployment-6d5f899847- default 671fc0ad-ef20-41af-9baf-b6763131170f 921 0 2025-01-13 20:46:19 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.153 nginx-deployment-6d5f899847-l84xv eth0 default [] [] [kns.default ksa.default.default] cali311c2bface4 [] []}} ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.339 [INFO][2968] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.380 [INFO][3003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" 
HandleID="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Workload="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.395 [INFO][3003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" HandleID="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Workload="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f45e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.153", "pod":"nginx-deployment-6d5f899847-l84xv", "timestamp":"2025-01-13 20:46:29.380939625 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.395 [INFO][3003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.395 [INFO][3003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.395 [INFO][3003] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153' Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.399 [INFO][3003] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.403 [INFO][3003] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.495 [INFO][3003] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.507 [INFO][3003] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.509 [INFO][3003] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.509 [INFO][3003] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.511 [INFO][3003] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.533 [INFO][3003] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.558 [INFO][3003] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.129/26] block=192.168.31.128/26 
handle="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.559 [INFO][3003] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.129/26] handle="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" host="10.0.0.153" Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.559 [INFO][3003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:29.616098 containerd[1572]: 2025-01-13 20:46:29.559 [INFO][3003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.129/26] IPv6=[] ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" HandleID="k8s-pod-network.d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Workload="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" Jan 13 20:46:29.616695 containerd[1572]: 2025-01-13 20:46:29.563 [INFO][2968] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"671fc0ad-ef20-41af-9baf-b6763131170f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"nginx-deployment-6d5f899847-l84xv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali311c2bface4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:29.616695 containerd[1572]: 2025-01-13 20:46:29.563 [INFO][2968] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.129/32] ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" Jan 13 20:46:29.616695 containerd[1572]: 2025-01-13 20:46:29.563 [INFO][2968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali311c2bface4 ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" Jan 13 20:46:29.616695 containerd[1572]: 2025-01-13 20:46:29.573 [INFO][2968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" Jan 13 20:46:29.616695 containerd[1572]: 2025-01-13 20:46:29.573 [INFO][2968] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" 
WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"671fc0ad-ef20-41af-9baf-b6763131170f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e", Pod:"nginx-deployment-6d5f899847-l84xv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali311c2bface4", MAC:"2a:4d:4a:ed:7e:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:29.616695 containerd[1572]: 2025-01-13 20:46:29.613 [INFO][2968] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e" Namespace="default" Pod="nginx-deployment-6d5f899847-l84xv" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--6d5f899847--l84xv-eth0" Jan 13 20:46:29.640919 systemd-networkd[1246]: cali46a457a65c7: Link UP Jan 13 20:46:29.641531 systemd-networkd[1246]: cali46a457a65c7: Gained carrier Jan 13 20:46:29.686578 
containerd[1572]: time="2025-01-13T20:46:29.686445514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:29.687466 containerd[1572]: time="2025-01-13T20:46:29.687168454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:29.687466 containerd[1572]: time="2025-01-13T20:46:29.687186980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:29.687466 containerd[1572]: time="2025-01-13T20:46:29.687391671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:29.719258 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:29.745825 containerd[1572]: time="2025-01-13T20:46:29.745786213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-l84xv,Uid:671fc0ad-ef20-41af-9baf-b6763131170f,Namespace:default,Attempt:5,} returns sandbox id \"d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e\"" Jan 13 20:46:29.747392 containerd[1572]: time="2025-01-13T20:46:29.747362392Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.373 [INFO][2993] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.395 [INFO][2993] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-csi--node--driver--7ccjv-eth0 csi-node-driver- calico-system a7de8883-b1b1-470b-8914-6f939b888e9c 719 0 2025-01-13 20:46:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.153 csi-node-driver-7ccjv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali46a457a65c7 [] []}} ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.396 [INFO][2993] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.423 [INFO][3013] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" HandleID="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Workload="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.496 [INFO][3013] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" HandleID="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Workload="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f40e0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.153", "pod":"csi-node-driver-7ccjv", "timestamp":"2025-01-13 20:46:29.423947042 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.496 [INFO][3013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.559 [INFO][3013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.559 [INFO][3013] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153' Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.561 [INFO][3013] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.564 [INFO][3013] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.568 [INFO][3013] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.571 [INFO][3013] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.574 [INFO][3013] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.574 [INFO][3013] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.576 [INFO][3013] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966 Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.604 [INFO][3013] ipam/ipam.go 1203: Writing block in order to claim 
IPs block=192.168.31.128/26 handle="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.636 [INFO][3013] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.130/26] block=192.168.31.128/26 handle="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.636 [INFO][3013] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.130/26] handle="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" host="10.0.0.153" Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.636 [INFO][3013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:29.775726 containerd[1572]: 2025-01-13 20:46:29.636 [INFO][3013] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.130/26] IPv6=[] ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" HandleID="k8s-pod-network.8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Workload="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" Jan 13 20:46:29.776503 containerd[1572]: 2025-01-13 20:46:29.639 [INFO][2993] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-csi--node--driver--7ccjv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7de8883-b1b1-470b-8914-6f939b888e9c", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"csi-node-driver-7ccjv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46a457a65c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:29.776503 containerd[1572]: 2025-01-13 20:46:29.639 [INFO][2993] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.130/32] ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" Jan 13 20:46:29.776503 containerd[1572]: 2025-01-13 20:46:29.639 [INFO][2993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46a457a65c7 ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" Jan 13 20:46:29.776503 containerd[1572]: 2025-01-13 20:46:29.641 [INFO][2993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" 
WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" Jan 13 20:46:29.776503 containerd[1572]: 2025-01-13 20:46:29.642 [INFO][2993] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-csi--node--driver--7ccjv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7de8883-b1b1-470b-8914-6f939b888e9c", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966", Pod:"csi-node-driver-7ccjv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46a457a65c7", MAC:"8a:2c:98:f0:65:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:29.776503 containerd[1572]: 
2025-01-13 20:46:29.773 [INFO][2993] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966" Namespace="calico-system" Pod="csi-node-driver-7ccjv" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--7ccjv-eth0" Jan 13 20:46:29.821548 kubelet[1906]: E0113 20:46:29.821510 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:29.851428 containerd[1572]: time="2025-01-13T20:46:29.850315302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:29.851428 containerd[1572]: time="2025-01-13T20:46:29.851299982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:29.851428 containerd[1572]: time="2025-01-13T20:46:29.851331602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:29.852095 containerd[1572]: time="2025-01-13T20:46:29.851428888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:29.880172 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:29.894478 containerd[1572]: time="2025-01-13T20:46:29.894428710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ccjv,Uid:a7de8883-b1b1-470b-8914-6f939b888e9c,Namespace:calico-system,Attempt:6,} returns sandbox id \"8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966\"" Jan 13 20:46:30.284191 kubelet[1906]: E0113 20:46:30.284159 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:30.485052 kernel: bpftool[3278]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:46:30.712215 systemd-networkd[1246]: vxlan.calico: Link UP Jan 13 20:46:30.712227 systemd-networkd[1246]: vxlan.calico: Gained carrier Jan 13 20:46:31.259220 systemd-networkd[1246]: cali46a457a65c7: Gained IPv6LL Jan 13 20:46:31.284963 kubelet[1906]: E0113 20:46:31.284924 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:31.515402 systemd-networkd[1246]: cali311c2bface4: Gained IPv6LL Jan 13 20:46:31.771647 systemd-networkd[1246]: vxlan.calico: Gained IPv6LL Jan 13 20:46:32.286522 kubelet[1906]: E0113 20:46:32.286445 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:33.281397 update_engine[1553]: I20250113 20:46:33.281307 1553 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:46:33.287623 kubelet[1906]: E0113 20:46:33.287587 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:33.353045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3313) Jan 13 20:46:33.467135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3313) Jan 13 20:46:33.502256 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3313) Jan 13 20:46:34.288463 kubelet[1906]: E0113 20:46:34.288398 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:35.289458 kubelet[1906]: E0113 20:46:35.289398 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:36.290402 kubelet[1906]: E0113 20:46:36.290329 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:37.291468 kubelet[1906]: E0113 20:46:37.291422 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:38.022568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823990648.mount: Deactivated successfully. 
Jan 13 20:46:38.292281 kubelet[1906]: E0113 20:46:38.292242 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:39.293106 kubelet[1906]: E0113 20:46:39.293051 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:40.263400 kubelet[1906]: E0113 20:46:40.263355 1906 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:40.293635 kubelet[1906]: E0113 20:46:40.293604 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:41.294152 kubelet[1906]: E0113 20:46:41.294090 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:42.294999 kubelet[1906]: E0113 20:46:42.294935 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:43.295831 kubelet[1906]: E0113 20:46:43.295754 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:44.296724 kubelet[1906]: E0113 20:46:44.296658 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:45.297659 kubelet[1906]: E0113 20:46:45.297596 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:45.832067 kubelet[1906]: E0113 20:46:45.832038 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:46.298801 kubelet[1906]: E0113 20:46:46.298734 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:47.299869 kubelet[1906]: E0113 20:46:47.299790 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:48.300188 kubelet[1906]: E0113 20:46:48.300131 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:49.300964 kubelet[1906]: E0113 20:46:49.300877 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:50.301884 kubelet[1906]: E0113 20:46:50.301831 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:50.420839 containerd[1572]: time="2025-01-13T20:46:50.420751314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:50.489227 containerd[1572]: time="2025-01-13T20:46:50.489169278Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 20:46:50.571957 containerd[1572]: time="2025-01-13T20:46:50.571786174Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:50.685657 containerd[1572]: time="2025-01-13T20:46:50.685598995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:50.686706 containerd[1572]: time="2025-01-13T20:46:50.686643813Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 20.939231978s" Jan 13 20:46:50.686706 containerd[1572]: time="2025-01-13T20:46:50.686706411Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:46:50.687384 containerd[1572]: time="2025-01-13T20:46:50.687233484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:46:50.688099 containerd[1572]: time="2025-01-13T20:46:50.688072095Z" level=info msg="CreateContainer within sandbox \"d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:46:51.302466 kubelet[1906]: E0113 20:46:51.302399 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:51.758469 containerd[1572]: time="2025-01-13T20:46:51.758228857Z" level=info msg="CreateContainer within sandbox \"d50458e7cc87cd407218d5268fcbd2c804f2b1a01ea31547dec20bf75f47bb9e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4deba3cafae1b391691e8b43bb1ebad5b123926ba4f4baee4ea1635e7295d47d\"" Jan 13 20:46:51.759161 containerd[1572]: time="2025-01-13T20:46:51.758876988Z" level=info msg="StartContainer for \"4deba3cafae1b391691e8b43bb1ebad5b123926ba4f4baee4ea1635e7295d47d\"" Jan 13 20:46:52.093460 containerd[1572]: time="2025-01-13T20:46:52.093420360Z" level=info msg="StartContainer for \"4deba3cafae1b391691e8b43bb1ebad5b123926ba4f4baee4ea1635e7295d47d\" returns successfully" Jan 13 20:46:52.302716 kubelet[1906]: E0113 20:46:52.302680 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:52.402840 kubelet[1906]: I0113 20:46:52.402703 1906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="default/nginx-deployment-6d5f899847-l84xv" podStartSLOduration=12.462763055 podStartE2EDuration="33.402670812s" podCreationTimestamp="2025-01-13 20:46:19 +0000 UTC" firstStartedPulling="2025-01-13 20:46:29.747114478 +0000 UTC m=+30.274014206" lastFinishedPulling="2025-01-13 20:46:50.687022246 +0000 UTC m=+51.213921963" observedRunningTime="2025-01-13 20:46:52.402541879 +0000 UTC m=+52.929441606" watchObservedRunningTime="2025-01-13 20:46:52.402670812 +0000 UTC m=+52.929570539" Jan 13 20:46:53.303362 kubelet[1906]: E0113 20:46:53.303312 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:54.304169 kubelet[1906]: E0113 20:46:54.304095 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:55.304832 kubelet[1906]: E0113 20:46:55.304761 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:55.362500 containerd[1572]: time="2025-01-13T20:46:55.362438344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:55.375377 containerd[1572]: time="2025-01-13T20:46:55.375326034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:46:55.396335 containerd[1572]: time="2025-01-13T20:46:55.396298567Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:55.444575 containerd[1572]: time="2025-01-13T20:46:55.444527157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:55.445206 
containerd[1572]: time="2025-01-13T20:46:55.445168384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 4.757909994s" Jan 13 20:46:55.445251 containerd[1572]: time="2025-01-13T20:46:55.445208399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:46:55.446727 containerd[1572]: time="2025-01-13T20:46:55.446698202Z" level=info msg="CreateContainer within sandbox \"8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:46:55.971690 containerd[1572]: time="2025-01-13T20:46:55.971616953Z" level=info msg="CreateContainer within sandbox \"8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9d77dfe5339b6fc09507ea008bfb2ae945bbfe444128b7db240eebf5679b7a48\"" Jan 13 20:46:55.972228 containerd[1572]: time="2025-01-13T20:46:55.972197465Z" level=info msg="StartContainer for \"9d77dfe5339b6fc09507ea008bfb2ae945bbfe444128b7db240eebf5679b7a48\"" Jan 13 20:46:56.305239 kubelet[1906]: E0113 20:46:56.305193 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:56.327446 containerd[1572]: time="2025-01-13T20:46:56.327399336Z" level=info msg="StartContainer for \"9d77dfe5339b6fc09507ea008bfb2ae945bbfe444128b7db240eebf5679b7a48\" returns successfully" Jan 13 20:46:56.328494 containerd[1572]: time="2025-01-13T20:46:56.328436296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 
20:46:57.306065 kubelet[1906]: E0113 20:46:57.305995 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:58.306423 kubelet[1906]: E0113 20:46:58.306363 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:58.932091 kubelet[1906]: I0113 20:46:58.932039 1906 topology_manager.go:215] "Topology Admit Handler" podUID="62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 20:46:58.969926 kubelet[1906]: I0113 20:46:58.969883 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d-data\") pod \"nfs-server-provisioner-0\" (UID: \"62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d\") " pod="default/nfs-server-provisioner-0" Jan 13 20:46:58.969926 kubelet[1906]: I0113 20:46:58.969937 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gv4\" (UniqueName: \"kubernetes.io/projected/62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d-kube-api-access-m4gv4\") pod \"nfs-server-provisioner-0\" (UID: \"62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d\") " pod="default/nfs-server-provisioner-0" Jan 13 20:46:59.237293 containerd[1572]: time="2025-01-13T20:46:59.237115354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d,Namespace:default,Attempt:0,}" Jan 13 20:46:59.306784 kubelet[1906]: E0113 20:46:59.306746 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:46:59.513173 containerd[1572]: time="2025-01-13T20:46:59.513007793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 13 20:46:59.578913 containerd[1572]: time="2025-01-13T20:46:59.578859009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:46:59.634486 containerd[1572]: time="2025-01-13T20:46:59.634426289Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:59.796250 containerd[1572]: time="2025-01-13T20:46:59.796199910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:59.797264 containerd[1572]: time="2025-01-13T20:46:59.797147542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.468648587s" Jan 13 20:46:59.797264 containerd[1572]: time="2025-01-13T20:46:59.797176617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:46:59.798725 containerd[1572]: time="2025-01-13T20:46:59.798689882Z" level=info msg="CreateContainer within sandbox \"8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:46:59.912945 systemd-networkd[1246]: cali60e51b789ff: Link UP Jan 13 20:46:59.913409 systemd-networkd[1246]: cali60e51b789ff: Gained carrier Jan 13 20:46:59.937592 containerd[1572]: 
2025-01-13 20:46:59.775 [INFO][3545] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d 1197 0 2025-01-13 20:46:58 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.153 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.775 [INFO][3545] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.802 [INFO][3559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" HandleID="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" 
Workload="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.809 [INFO][3559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" HandleID="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Workload="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f55b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.153", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:46:59.802229347 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.809 [INFO][3559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.809 [INFO][3559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.809 [INFO][3559] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153' Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.810 [INFO][3559] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.813 [INFO][3559] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.816 [INFO][3559] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.818 [INFO][3559] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.819 [INFO][3559] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.819 [INFO][3559] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.820 [INFO][3559] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15 Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.831 [INFO][3559] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.908 [INFO][3559] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.131/26] block=192.168.31.128/26 
handle="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.908 [INFO][3559] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.131/26] handle="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" host="10.0.0.153" Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.908 [INFO][3559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:59.937592 containerd[1572]: 2025-01-13 20:46:59.909 [INFO][3559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.131/26] IPv6=[] ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" HandleID="k8s-pod-network.de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Workload="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:46:59.938486 containerd[1572]: 2025-01-13 20:46:59.911 [INFO][3545] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:59.938486 containerd[1572]: 2025-01-13 20:46:59.911 [INFO][3545] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.131/32] ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:46:59.938486 containerd[1572]: 2025-01-13 20:46:59.911 [INFO][3545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:46:59.938486 containerd[1572]: 2025-01-13 20:46:59.913 [INFO][3545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:46:59.938702 containerd[1572]: 2025-01-13 20:46:59.913 [INFO][3545] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f6:fa:a6:de:f7:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:59.938702 containerd[1572]: 2025-01-13 20:46:59.935 [INFO][3545] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:47:00.165789 containerd[1572]: time="2025-01-13T20:47:00.165615215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:47:00.165789 containerd[1572]: time="2025-01-13T20:47:00.165678945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:47:00.165789 containerd[1572]: time="2025-01-13T20:47:00.165690286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:00.165934 containerd[1572]: time="2025-01-13T20:47:00.165772812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:00.190132 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:47:00.220324 containerd[1572]: time="2025-01-13T20:47:00.220284900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:62e5c5bc-eb19-4bff-8bdb-b3cd026ec59d,Namespace:default,Attempt:0,} returns sandbox id \"de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15\"" Jan 13 20:47:00.221629 containerd[1572]: time="2025-01-13T20:47:00.221593460Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:47:00.263612 kubelet[1906]: E0113 20:47:00.263576 1906 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:00.275857 containerd[1572]: time="2025-01-13T20:47:00.275824169Z" level=info msg="StopPodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\"" Jan 13 20:47:00.276241 containerd[1572]: time="2025-01-13T20:47:00.275926823Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully" Jan 13 20:47:00.276241 containerd[1572]: time="2025-01-13T20:47:00.275937904Z" level=info msg="StopPodSandbox for 
\"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully" Jan 13 20:47:00.276241 containerd[1572]: time="2025-01-13T20:47:00.276220655Z" level=info msg="RemovePodSandbox for \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\"" Jan 13 20:47:00.276323 containerd[1572]: time="2025-01-13T20:47:00.276243768Z" level=info msg="Forcibly stopping sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\"" Jan 13 20:47:00.276366 containerd[1572]: time="2025-01-13T20:47:00.276323890Z" level=info msg="TearDown network for sandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" successfully" Jan 13 20:47:00.307511 kubelet[1906]: E0113 20:47:00.307489 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:00.457189 containerd[1572]: time="2025-01-13T20:47:00.457086459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:00.457189 containerd[1572]: time="2025-01-13T20:47:00.457133898Z" level=info msg="RemovePodSandbox \"cf93863586937aaaccf0275b186d612aa060d7b41d3775f48834f87503594bd9\" returns successfully" Jan 13 20:47:00.457488 containerd[1572]: time="2025-01-13T20:47:00.457463398Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\"" Jan 13 20:47:00.457615 containerd[1572]: time="2025-01-13T20:47:00.457595556Z" level=info msg="TearDown network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" successfully" Jan 13 20:47:00.457615 containerd[1572]: time="2025-01-13T20:47:00.457611716Z" level=info msg="StopPodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" returns successfully" Jan 13 20:47:00.457835 containerd[1572]: time="2025-01-13T20:47:00.457808877Z" level=info msg="RemovePodSandbox for \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\"" Jan 13 20:47:00.457835 containerd[1572]: time="2025-01-13T20:47:00.457828744Z" level=info msg="Forcibly stopping sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\"" Jan 13 20:47:00.457964 containerd[1572]: time="2025-01-13T20:47:00.457890852Z" level=info msg="TearDown network for sandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" successfully" Jan 13 20:47:00.669317 containerd[1572]: time="2025-01-13T20:47:00.669277102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:00.669317 containerd[1572]: time="2025-01-13T20:47:00.669317177Z" level=info msg="RemovePodSandbox \"be2dc96ec23b8d3948629be1c508212ab93adc3de2264b1cdf9f1282f9e54081\" returns successfully" Jan 13 20:47:00.669691 containerd[1572]: time="2025-01-13T20:47:00.669656846Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\"" Jan 13 20:47:00.669784 containerd[1572]: time="2025-01-13T20:47:00.669766301Z" level=info msg="TearDown network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" successfully" Jan 13 20:47:00.669784 containerd[1572]: time="2025-01-13T20:47:00.669778414Z" level=info msg="StopPodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" returns successfully" Jan 13 20:47:00.670107 containerd[1572]: time="2025-01-13T20:47:00.670071125Z" level=info msg="RemovePodSandbox for \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\"" Jan 13 20:47:00.670107 containerd[1572]: time="2025-01-13T20:47:00.670097424Z" level=info msg="Forcibly stopping sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\"" Jan 13 20:47:00.670192 containerd[1572]: time="2025-01-13T20:47:00.670170471Z" level=info msg="TearDown network for sandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" successfully" Jan 13 20:47:00.673555 containerd[1572]: time="2025-01-13T20:47:00.673522202Z" level=info msg="CreateContainer within sandbox \"8ddd97412606f4c9b3db74da7378e8233793d1fb18bf2cc5e0986bc9657d3966\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7d1f2defe7a60616e56d47cf0f3c3c06c32eb331c2899cda6a2c94ce0ab51f77\"" Jan 13 20:47:00.673848 containerd[1572]: time="2025-01-13T20:47:00.673829841Z" level=info msg="StartContainer for \"7d1f2defe7a60616e56d47cf0f3c3c06c32eb331c2899cda6a2c94ce0ab51f77\"" Jan 13 20:47:00.696201 containerd[1572]: time="2025-01-13T20:47:00.696142837Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:47:00.696201 containerd[1572]: time="2025-01-13T20:47:00.696199173Z" level=info msg="RemovePodSandbox \"5ffcf8eac0a6693cbbc9a3eebe746e0205be1192d2b6c0f98f2f8339c14226ea\" returns successfully" Jan 13 20:47:00.696677 containerd[1572]: time="2025-01-13T20:47:00.696651163Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\"" Jan 13 20:47:00.696785 containerd[1572]: time="2025-01-13T20:47:00.696764045Z" level=info msg="TearDown network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" successfully" Jan 13 20:47:00.696785 containerd[1572]: time="2025-01-13T20:47:00.696782540Z" level=info msg="StopPodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" returns successfully" Jan 13 20:47:00.697107 containerd[1572]: time="2025-01-13T20:47:00.697049682Z" level=info msg="RemovePodSandbox for \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\"" Jan 13 20:47:00.697107 containerd[1572]: time="2025-01-13T20:47:00.697076502Z" level=info msg="Forcibly stopping sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\"" Jan 13 20:47:00.697231 containerd[1572]: time="2025-01-13T20:47:00.697159839Z" level=info msg="TearDown network for sandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" successfully" Jan 13 20:47:00.779906 kubelet[1906]: I0113 20:47:00.779810 1906 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:47:00.779906 kubelet[1906]: I0113 20:47:00.779853 1906 csi_plugin.go:112] kubernetes.io/csi: Register new plugin 
with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:47:00.798754 containerd[1572]: time="2025-01-13T20:47:00.798700478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:47:00.801496 containerd[1572]: time="2025-01-13T20:47:00.801445909Z" level=info msg="RemovePodSandbox \"e14d271694079e6aa9b9169f448504ed169278c3253d44264b18a956f1f0aea7\" returns successfully" Jan 13 20:47:00.801588 containerd[1572]: time="2025-01-13T20:47:00.801498738Z" level=info msg="StartContainer for \"7d1f2defe7a60616e56d47cf0f3c3c06c32eb331c2899cda6a2c94ce0ab51f77\" returns successfully" Jan 13 20:47:00.802111 containerd[1572]: time="2025-01-13T20:47:00.801962731Z" level=info msg="StopPodSandbox for \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\"" Jan 13 20:47:00.802111 containerd[1572]: time="2025-01-13T20:47:00.802060825Z" level=info msg="TearDown network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" successfully" Jan 13 20:47:00.802111 containerd[1572]: time="2025-01-13T20:47:00.802070903Z" level=info msg="StopPodSandbox for \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" returns successfully" Jan 13 20:47:00.802313 containerd[1572]: time="2025-01-13T20:47:00.802276730Z" level=info msg="RemovePodSandbox for \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\"" Jan 13 20:47:00.802313 containerd[1572]: time="2025-01-13T20:47:00.802298361Z" level=info msg="Forcibly stopping sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\"" Jan 13 20:47:00.802434 containerd[1572]: time="2025-01-13T20:47:00.802363955Z" level=info msg="TearDown network for sandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" 
successfully" Jan 13 20:47:00.885811 containerd[1572]: time="2025-01-13T20:47:00.885751768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:47:00.885811 containerd[1572]: time="2025-01-13T20:47:00.885802774Z" level=info msg="RemovePodSandbox \"26f0d2599c1a52e79eb1031693ee3eb7e4a3050bfc43c90c1a96e60923d31fd1\" returns successfully" Jan 13 20:47:00.886133 containerd[1572]: time="2025-01-13T20:47:00.886107286Z" level=info msg="StopPodSandbox for \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\"" Jan 13 20:47:00.886226 containerd[1572]: time="2025-01-13T20:47:00.886195983Z" level=info msg="TearDown network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\" successfully" Jan 13 20:47:00.886226 containerd[1572]: time="2025-01-13T20:47:00.886209929Z" level=info msg="StopPodSandbox for \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\" returns successfully" Jan 13 20:47:00.886509 containerd[1572]: time="2025-01-13T20:47:00.886483634Z" level=info msg="RemovePodSandbox for \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\"" Jan 13 20:47:00.886566 containerd[1572]: time="2025-01-13T20:47:00.886510604Z" level=info msg="Forcibly stopping sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\"" Jan 13 20:47:00.886678 containerd[1572]: time="2025-01-13T20:47:00.886634407Z" level=info msg="TearDown network for sandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\" successfully" Jan 13 20:47:01.045435 containerd[1572]: time="2025-01-13T20:47:01.045400803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\": an error occurred when try to 
find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:47:01.045536 containerd[1572]: time="2025-01-13T20:47:01.045437702Z" level=info msg="RemovePodSandbox \"4f5cbd7edabec4441803eb5dd69e5171538950dfe7fd758276710ca62567aedf\" returns successfully" Jan 13 20:47:01.045774 containerd[1572]: time="2025-01-13T20:47:01.045750901Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\"" Jan 13 20:47:01.045880 containerd[1572]: time="2025-01-13T20:47:01.045852151Z" level=info msg="TearDown network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" successfully" Jan 13 20:47:01.045880 containerd[1572]: time="2025-01-13T20:47:01.045871347Z" level=info msg="StopPodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" returns successfully" Jan 13 20:47:01.046131 containerd[1572]: time="2025-01-13T20:47:01.046108423Z" level=info msg="RemovePodSandbox for \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\"" Jan 13 20:47:01.046190 containerd[1572]: time="2025-01-13T20:47:01.046130775Z" level=info msg="Forcibly stopping sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\"" Jan 13 20:47:01.046249 containerd[1572]: time="2025-01-13T20:47:01.046206929Z" level=info msg="TearDown network for sandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" successfully" Jan 13 20:47:01.083154 systemd-networkd[1246]: cali60e51b789ff: Gained IPv6LL Jan 13 20:47:01.308100 kubelet[1906]: E0113 20:47:01.307979 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:01.663598 containerd[1572]: time="2025-01-13T20:47:01.663480349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\": an error occurred when try to find sandbox: not 
found. Sending the event with nil podSandboxStatus." Jan 13 20:47:01.663598 containerd[1572]: time="2025-01-13T20:47:01.663523761Z" level=info msg="RemovePodSandbox \"2e07b3a74dfd8839f26360ff6fce7efb0a50af60db9e76648cc927e10e801448\" returns successfully" Jan 13 20:47:01.664046 containerd[1572]: time="2025-01-13T20:47:01.663886362Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\"" Jan 13 20:47:01.664046 containerd[1572]: time="2025-01-13T20:47:01.663982263Z" level=info msg="TearDown network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" successfully" Jan 13 20:47:01.664046 containerd[1572]: time="2025-01-13T20:47:01.663992873Z" level=info msg="StopPodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" returns successfully" Jan 13 20:47:01.664320 containerd[1572]: time="2025-01-13T20:47:01.664262279Z" level=info msg="RemovePodSandbox for \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\"" Jan 13 20:47:01.664320 containerd[1572]: time="2025-01-13T20:47:01.664284030Z" level=info msg="Forcibly stopping sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\"" Jan 13 20:47:01.664385 containerd[1572]: time="2025-01-13T20:47:01.664348601Z" level=info msg="TearDown network for sandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" successfully" Jan 13 20:47:01.908864 containerd[1572]: time="2025-01-13T20:47:01.908800402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:01.908864 containerd[1572]: time="2025-01-13T20:47:01.908864132Z" level=info msg="RemovePodSandbox \"84edbd93677b1661e16f8940e3ac98eb96412f65310409e0bf188eda9ef1120c\" returns successfully" Jan 13 20:47:01.909295 containerd[1572]: time="2025-01-13T20:47:01.909269855Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\"" Jan 13 20:47:01.909399 containerd[1572]: time="2025-01-13T20:47:01.909380893Z" level=info msg="TearDown network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" successfully" Jan 13 20:47:01.909429 containerd[1572]: time="2025-01-13T20:47:01.909399007Z" level=info msg="StopPodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" returns successfully" Jan 13 20:47:01.909726 containerd[1572]: time="2025-01-13T20:47:01.909698190Z" level=info msg="RemovePodSandbox for \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\"" Jan 13 20:47:01.909776 containerd[1572]: time="2025-01-13T20:47:01.909726533Z" level=info msg="Forcibly stopping sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\"" Jan 13 20:47:01.909854 containerd[1572]: time="2025-01-13T20:47:01.909806583Z" level=info msg="TearDown network for sandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" successfully" Jan 13 20:47:01.957534 containerd[1572]: time="2025-01-13T20:47:01.957400813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:01.957534 containerd[1572]: time="2025-01-13T20:47:01.957449795Z" level=info msg="RemovePodSandbox \"8ea47646a33e14e044b96fcc80b1215d5d580b9197bf020e0ba6368ce298dc71\" returns successfully" Jan 13 20:47:01.957816 containerd[1572]: time="2025-01-13T20:47:01.957774576Z" level=info msg="StopPodSandbox for \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\"" Jan 13 20:47:01.957876 containerd[1572]: time="2025-01-13T20:47:01.957859816Z" level=info msg="TearDown network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" successfully" Jan 13 20:47:01.957876 containerd[1572]: time="2025-01-13T20:47:01.957869504Z" level=info msg="StopPodSandbox for \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" returns successfully" Jan 13 20:47:01.958265 containerd[1572]: time="2025-01-13T20:47:01.958220424Z" level=info msg="RemovePodSandbox for \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\"" Jan 13 20:47:01.958265 containerd[1572]: time="2025-01-13T20:47:01.958262363Z" level=info msg="Forcibly stopping sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\"" Jan 13 20:47:01.958468 containerd[1572]: time="2025-01-13T20:47:01.958417134Z" level=info msg="TearDown network for sandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" successfully" Jan 13 20:47:01.984805 containerd[1572]: time="2025-01-13T20:47:01.984760420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:01.984896 containerd[1572]: time="2025-01-13T20:47:01.984815924Z" level=info msg="RemovePodSandbox \"f41b13194083fc7e42f65c30c39b8def669e97275649f2238b8778ca783551b7\" returns successfully" Jan 13 20:47:01.985368 containerd[1572]: time="2025-01-13T20:47:01.985248998Z" level=info msg="StopPodSandbox for \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\"" Jan 13 20:47:01.985368 containerd[1572]: time="2025-01-13T20:47:01.985362761Z" level=info msg="TearDown network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\" successfully" Jan 13 20:47:01.985457 containerd[1572]: time="2025-01-13T20:47:01.985378020Z" level=info msg="StopPodSandbox for \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\" returns successfully" Jan 13 20:47:01.985739 containerd[1572]: time="2025-01-13T20:47:01.985710255Z" level=info msg="RemovePodSandbox for \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\"" Jan 13 20:47:01.985739 containerd[1572]: time="2025-01-13T20:47:01.985735873Z" level=info msg="Forcibly stopping sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\"" Jan 13 20:47:01.985866 containerd[1572]: time="2025-01-13T20:47:01.985816274Z" level=info msg="TearDown network for sandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\" successfully" Jan 13 20:47:02.308365 kubelet[1906]: E0113 20:47:02.308327 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:03.309046 kubelet[1906]: E0113 20:47:03.308980 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:47:03.756307 containerd[1572]: time="2025-01-13T20:47:03.756180459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\": an error occurred 
when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:47:03.756307 containerd[1572]: time="2025-01-13T20:47:03.756236103Z" level=info msg="RemovePodSandbox \"25debaa7490345a1168063de7b0cebdfe50b3f8839cb18bb76aea5e369960dd8\" returns successfully"
Jan 13 20:47:04.309514 kubelet[1906]: E0113 20:47:04.309478 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:05.021851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007594744.mount: Deactivated successfully.
Jan 13 20:47:05.310628 kubelet[1906]: E0113 20:47:05.310587 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:06.310784 kubelet[1906]: E0113 20:47:06.310719 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:07.311373 kubelet[1906]: E0113 20:47:07.311314 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:08.312131 kubelet[1906]: E0113 20:47:08.312003 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:09.313145 kubelet[1906]: E0113 20:47:09.313087 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:10.313661 kubelet[1906]: E0113 20:47:10.313601 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:10.571036 containerd[1572]: time="2025-01-13T20:47:10.570834427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:47:10.587177 containerd[1572]: time="2025-01-13T20:47:10.587128122Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 13 20:47:10.609418 containerd[1572]: time="2025-01-13T20:47:10.609370228Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:47:10.616141 containerd[1572]: time="2025-01-13T20:47:10.616104393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:47:10.617152 containerd[1572]: time="2025-01-13T20:47:10.617103699Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 10.395475895s"
Jan 13 20:47:10.617152 containerd[1572]: time="2025-01-13T20:47:10.617150327Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 20:47:10.618941 containerd[1572]: time="2025-01-13T20:47:10.618903569Z" level=info msg="CreateContainer within sandbox \"de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 20:47:10.813482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount382608323.mount: Deactivated successfully.
Jan 13 20:47:11.054434 containerd[1572]: time="2025-01-13T20:47:11.054375018Z" level=info msg="CreateContainer within sandbox \"de35f28257702aa3546f3679a978704ebfd494150887758a2b5b1f517bff4b15\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"33dec8c0b1fd8719c27c77113d6416b719927f4fe717328f8bdde79e2cb69ef0\""
Jan 13 20:47:11.054927 containerd[1572]: time="2025-01-13T20:47:11.054869997Z" level=info msg="StartContainer for \"33dec8c0b1fd8719c27c77113d6416b719927f4fe717328f8bdde79e2cb69ef0\""
Jan 13 20:47:11.140264 containerd[1572]: time="2025-01-13T20:47:11.140216905Z" level=info msg="StartContainer for \"33dec8c0b1fd8719c27c77113d6416b719927f4fe717328f8bdde79e2cb69ef0\" returns successfully"
Jan 13 20:47:11.314418 kubelet[1906]: E0113 20:47:11.314249 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:11.498518 kubelet[1906]: I0113 20:47:11.498458 1906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.102304682 podStartE2EDuration="13.498414788s" podCreationTimestamp="2025-01-13 20:46:58 +0000 UTC" firstStartedPulling="2025-01-13 20:47:00.221258932 +0000 UTC m=+60.748158649" lastFinishedPulling="2025-01-13 20:47:10.617369028 +0000 UTC m=+71.144268755" observedRunningTime="2025-01-13 20:47:11.498361619 +0000 UTC m=+72.025261366" watchObservedRunningTime="2025-01-13 20:47:11.498414788 +0000 UTC m=+72.025314515"
Jan 13 20:47:11.498728 kubelet[1906]: I0113 20:47:11.498605 1906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7ccjv" podStartSLOduration=41.596642755 podStartE2EDuration="1m11.498587542s" podCreationTimestamp="2025-01-13 20:46:00 +0000 UTC" firstStartedPulling="2025-01-13 20:46:29.895458155 +0000 UTC m=+30.422357882" lastFinishedPulling="2025-01-13 20:46:59.797402942 +0000 UTC m=+60.324302669" observedRunningTime="2025-01-13 20:47:01.385587621 +0000 UTC m=+61.912487358" watchObservedRunningTime="2025-01-13 20:47:11.498587542 +0000 UTC m=+72.025487269"
Jan 13 20:47:12.314707 kubelet[1906]: E0113 20:47:12.314660 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:13.315001 kubelet[1906]: E0113 20:47:13.314923 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:14.315697 kubelet[1906]: E0113 20:47:14.315642 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:15.316051 kubelet[1906]: E0113 20:47:15.315959 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:16.317220 kubelet[1906]: E0113 20:47:16.317157 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:17.318279 kubelet[1906]: E0113 20:47:17.318219 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:18.319041 kubelet[1906]: E0113 20:47:18.318959 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:19.319147 kubelet[1906]: E0113 20:47:19.319082 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:20.262828 kubelet[1906]: E0113 20:47:20.262769 1906 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:20.320084 kubelet[1906]: E0113 20:47:20.319991 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:21.321004 kubelet[1906]: E0113 20:47:21.320925 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:22.321733 kubelet[1906]: E0113 20:47:22.321669 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:23.322692 kubelet[1906]: E0113 20:47:23.322616 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:24.322844 kubelet[1906]: E0113 20:47:24.322766 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:25.323981 kubelet[1906]: E0113 20:47:25.323895 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:25.637629 kubelet[1906]: I0113 20:47:25.637450 1906 topology_manager.go:215] "Topology Admit Handler" podUID="3b47c974-3385-44de-a228-c28d9f45a6aa" podNamespace="default" podName="test-pod-1"
Jan 13 20:47:25.782791 kubelet[1906]: I0113 20:47:25.782728 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlfws\" (UniqueName: \"kubernetes.io/projected/3b47c974-3385-44de-a228-c28d9f45a6aa-kube-api-access-vlfws\") pod \"test-pod-1\" (UID: \"3b47c974-3385-44de-a228-c28d9f45a6aa\") " pod="default/test-pod-1"
Jan 13 20:47:25.782791 kubelet[1906]: I0113 20:47:25.782792 1906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e8ecdbc-8d2f-4b2c-bccb-cd4520d8dd93\" (UniqueName: \"kubernetes.io/nfs/3b47c974-3385-44de-a228-c28d9f45a6aa-pvc-1e8ecdbc-8d2f-4b2c-bccb-cd4520d8dd93\") pod \"test-pod-1\" (UID: \"3b47c974-3385-44de-a228-c28d9f45a6aa\") " pod="default/test-pod-1"
Jan 13 20:47:25.907050 kernel: FS-Cache: Loaded
Jan 13 20:47:25.974605 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:47:25.974722 kernel: RPC: Registered udp transport module.
Jan 13 20:47:25.974752 kernel: RPC: Registered tcp transport module.
Jan 13 20:47:25.975237 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:47:25.976723 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:47:26.249495 kernel: NFS: Registering the id_resolver key type
Jan 13 20:47:26.249646 kernel: Key type id_resolver registered
Jan 13 20:47:26.249676 kernel: Key type id_legacy registered
Jan 13 20:47:26.276412 nfsidmap[3819]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:47:26.281304 nfsidmap[3822]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:47:26.325065 kubelet[1906]: E0113 20:47:26.324981 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:26.542762 containerd[1572]: time="2025-01-13T20:47:26.542697593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3b47c974-3385-44de-a228-c28d9f45a6aa,Namespace:default,Attempt:0,}"
Jan 13 20:47:26.974354 systemd-networkd[1246]: cali5ec59c6bf6e: Link UP
Jan 13 20:47:26.974589 systemd-networkd[1246]: cali5ec59c6bf6e: Gained carrier
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.725 [INFO][3825] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-test--pod--1-eth0 default 3b47c974-3385-44de-a228-c28d9f45a6aa 1289 0 2025-01-13 20:46:59 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.153 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.725 [INFO][3825] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.753 [INFO][3838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" HandleID="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Workload="10.0.0.153-k8s-test--pod--1-eth0"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.762 [INFO][3838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" HandleID="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Workload="10.0.0.153-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000437ae0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.153", "pod":"test-pod-1", "timestamp":"2025-01-13 20:47:26.752979002 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.762 [INFO][3838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.762 [INFO][3838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.762 [INFO][3838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153'
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.763 [INFO][3838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.767 [INFO][3838] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.771 [INFO][3838] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.773 [INFO][3838] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.775 [INFO][3838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.775 [INFO][3838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.776 [INFO][3838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.793 [INFO][3838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.968 [INFO][3838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.132/26] block=192.168.31.128/26 handle="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.968 [INFO][3838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.132/26] handle="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" host="10.0.0.153"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.968 [INFO][3838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.968 [INFO][3838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.132/26] IPv6=[] ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" HandleID="k8s-pod-network.637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Workload="10.0.0.153-k8s-test--pod--1-eth0"
Jan 13 20:47:27.017640 containerd[1572]: 2025-01-13 20:47:26.971 [INFO][3825] cni-plugin/k8s.go 386: Populated endpoint ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3b47c974-3385-44de-a228-c28d9f45a6aa", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:47:27.018271 containerd[1572]: 2025-01-13 20:47:26.971 [INFO][3825] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.132/32] ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Jan 13 20:47:27.018271 containerd[1572]: 2025-01-13 20:47:26.972 [INFO][3825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Jan 13 20:47:27.018271 containerd[1572]: 2025-01-13 20:47:26.974 [INFO][3825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Jan 13 20:47:27.018271 containerd[1572]: 2025-01-13 20:47:26.974 [INFO][3825] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3b47c974-3385-44de-a228-c28d9f45a6aa", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"86:c9:f3:84:1a:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:47:27.018271 containerd[1572]: 2025-01-13 20:47:27.014 [INFO][3825] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Jan 13 20:47:27.087856 containerd[1572]: time="2025-01-13T20:47:27.087756418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:47:27.088521 containerd[1572]: time="2025-01-13T20:47:27.088279523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:47:27.088521 containerd[1572]: time="2025-01-13T20:47:27.088340060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:47:27.088521 containerd[1572]: time="2025-01-13T20:47:27.088471775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:47:27.116712 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:47:27.141979 containerd[1572]: time="2025-01-13T20:47:27.141940794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3b47c974-3385-44de-a228-c28d9f45a6aa,Namespace:default,Attempt:0,} returns sandbox id \"637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710\""
Jan 13 20:47:27.143199 containerd[1572]: time="2025-01-13T20:47:27.143165830Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:47:27.326106 kubelet[1906]: E0113 20:47:27.326053 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:27.716422 containerd[1572]: time="2025-01-13T20:47:27.716240453Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:47:27.717704 containerd[1572]: time="2025-01-13T20:47:27.717654616Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:47:27.720752 containerd[1572]: time="2025-01-13T20:47:27.720713478Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 577.516228ms"
Jan 13 20:47:27.720752 containerd[1572]: time="2025-01-13T20:47:27.720747534Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 20:47:27.722407 containerd[1572]: time="2025-01-13T20:47:27.722373047Z" level=info msg="CreateContainer within sandbox \"637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:47:27.799320 containerd[1572]: time="2025-01-13T20:47:27.799253570Z" level=info msg="CreateContainer within sandbox \"637e81ddbbf721397b5c7316021c5e9c4b6c2a3ca652ca0111b80658f9c0c710\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"00dd8dc2e705d371b78f326dba0dd765524c827b2d5cf1f86803ca2be59077ed\""
Jan 13 20:47:27.799831 containerd[1572]: time="2025-01-13T20:47:27.799781865Z" level=info msg="StartContainer for \"00dd8dc2e705d371b78f326dba0dd765524c827b2d5cf1f86803ca2be59077ed\""
Jan 13 20:47:27.927832 containerd[1572]: time="2025-01-13T20:47:27.927770649Z" level=info msg="StartContainer for \"00dd8dc2e705d371b78f326dba0dd765524c827b2d5cf1f86803ca2be59077ed\" returns successfully"
Jan 13 20:47:28.219257 systemd-networkd[1246]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 20:47:28.326783 kubelet[1906]: E0113 20:47:28.326743 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:28.441668 kubelet[1906]: I0113 20:47:28.441602 1906 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=28.863525762 podStartE2EDuration="29.441557409s" podCreationTimestamp="2025-01-13 20:46:59 +0000 UTC" firstStartedPulling="2025-01-13 20:47:27.142923811 +0000 UTC m=+87.669823538" lastFinishedPulling="2025-01-13 20:47:27.720955458 +0000 UTC m=+88.247855185" observedRunningTime="2025-01-13 20:47:28.441360117 +0000 UTC m=+88.968259854" watchObservedRunningTime="2025-01-13 20:47:28.441557409 +0000 UTC m=+88.968457136"
Jan 13 20:47:29.327373 kubelet[1906]: E0113 20:47:29.327298 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:30.327801 kubelet[1906]: E0113 20:47:30.327758 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:31.328245 kubelet[1906]: E0113 20:47:31.328179 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:47:31.743097 kubelet[1906]: E0113 20:47:31.742928 1906 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 13 20:47:32.328785 kubelet[1906]: E0113 20:47:32.328733 1906 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"