Feb 13 15:18:57.892029 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:18:57.892053 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:18:57.892065 kernel: KASLR enabled
Feb 13 15:18:57.892071 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 15:18:57.892076 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Feb 13 15:18:57.892082 kernel: random: crng init done
Feb 13 15:18:57.892089 kernel: secureboot: Secure boot disabled
Feb 13 15:18:57.892095 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:18:57.892101 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 15:18:57.892108 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:18:57.892115 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892120 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892126 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892132 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892139 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892147 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892153 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892159 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892165 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:57.892172 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:18:57.892178 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 15:18:57.892184 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:18:57.892191 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:18:57.892197 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 15:18:57.892203 kernel: Zone ranges:
Feb 13 15:18:57.892211 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:18:57.892217 kernel: DMA32 empty
Feb 13 15:18:57.892223 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 15:18:57.893757 kernel: Movable zone start for each node
Feb 13 15:18:57.893771 kernel: Early memory node ranges
Feb 13 15:18:57.893778 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Feb 13 15:18:57.893785 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Feb 13 15:18:57.893793 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Feb 13 15:18:57.893804 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 15:18:57.893811 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 15:18:57.893817 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 15:18:57.893824 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 15:18:57.893840 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 15:18:57.893847 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 15:18:57.893854 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 15:18:57.893867 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 15:18:57.893874 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:18:57.893882 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:18:57.893891 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:18:57.893899 kernel: psci: Trusted OS migration not required
Feb 13 15:18:57.893905 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:18:57.893914 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:18:57.893921 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:18:57.893929 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:18:57.893937 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:18:57.893943 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:18:57.893950 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:18:57.893957 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:18:57.893965 kernel: CPU features: detected: Spectre-v4
Feb 13 15:18:57.893971 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:18:57.893978 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:18:57.893984 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:18:57.893991 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:18:57.893998 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:18:57.894004 kernel: alternatives: applying boot alternatives
Feb 13 15:18:57.894012 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:18:57.894019 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:18:57.894026 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:18:57.894033 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:18:57.894041 kernel: Fallback order for Node 0: 0
Feb 13 15:18:57.894047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 15:18:57.894054 kernel: Policy zone: Normal
Feb 13 15:18:57.894060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:18:57.894067 kernel: software IO TLB: area num 2.
Feb 13 15:18:57.894074 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 15:18:57.894081 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved)
Feb 13 15:18:57.894087 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:18:57.894094 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:18:57.894101 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:18:57.894108 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:18:57.894115 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:18:57.894123 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:18:57.894130 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:18:57.894140 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:18:57.894147 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:18:57.894154 kernel: GICv3: 256 SPIs implemented
Feb 13 15:18:57.894162 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:18:57.894169 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:18:57.894175 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:18:57.894182 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:18:57.894188 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:18:57.894195 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:18:57.894204 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:18:57.894210 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 15:18:57.894217 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 15:18:57.894224 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:18:57.894991 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:18:57.894999 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:18:57.895006 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:18:57.895013 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:18:57.895020 kernel: Console: colour dummy device 80x25
Feb 13 15:18:57.895028 kernel: ACPI: Core revision 20230628
Feb 13 15:18:57.895035 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:18:57.895047 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:18:57.895055 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:18:57.895062 kernel: landlock: Up and running.
Feb 13 15:18:57.895068 kernel: SELinux: Initializing.
Feb 13 15:18:57.895075 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:18:57.895082 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:18:57.895089 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:18:57.895096 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:18:57.895103 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:18:57.895112 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:18:57.895119 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:18:57.895125 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:18:57.895132 kernel: Remapping and enabling EFI services.
Feb 13 15:18:57.895139 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:18:57.895146 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:18:57.895153 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:18:57.895160 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 15:18:57.895167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:18:57.895175 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:18:57.895183 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:18:57.895195 kernel: SMP: Total of 2 processors activated.
Feb 13 15:18:57.895204 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:18:57.895211 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:18:57.895218 kernel: CPU features: detected: Common not Private translations
Feb 13 15:18:57.895256 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:18:57.895265 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:18:57.895272 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:18:57.895281 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:18:57.895289 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:18:57.895296 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:18:57.895303 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:18:57.895310 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:18:57.895317 kernel: alternatives: applying system-wide alternatives
Feb 13 15:18:57.895324 kernel: devtmpfs: initialized
Feb 13 15:18:57.895332 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:18:57.895340 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:18:57.895348 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:18:57.895355 kernel: SMBIOS 3.0.0 present.
Feb 13 15:18:57.895362 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 15:18:57.895369 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:18:57.895377 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:18:57.895384 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:18:57.895391 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:18:57.895399 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:18:57.895407 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Feb 13 15:18:57.895415 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:18:57.895422 kernel: cpuidle: using governor menu
Feb 13 15:18:57.895429 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:18:57.895437 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:18:57.895444 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:18:57.895452 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:18:57.895459 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:18:57.895466 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:18:57.895475 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:18:57.895483 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:18:57.895490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:18:57.895497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:18:57.895504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:18:57.895511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:18:57.895519 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:18:57.895526 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:18:57.895533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:18:57.895541 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:18:57.895551 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:18:57.895560 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:18:57.895568 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:18:57.895576 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:18:57.895583 kernel: ACPI: Interpreter enabled
Feb 13 15:18:57.895590 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:18:57.895597 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:18:57.895605 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:18:57.895614 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:18:57.895621 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:18:57.895797 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:18:57.895890 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:18:57.895959 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:18:57.896025 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:18:57.896090 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:18:57.896103 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:18:57.896110 kernel: PCI host bridge to bus 0000:00
Feb 13 15:18:57.896190 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:18:57.897381 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:18:57.897462 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:18:57.897521 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:18:57.897613 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:18:57.897720 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 15:18:57.897812 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 15:18:57.897881 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:18:57.897959 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.898029 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 15:18:57.898104 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.898175 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 15:18:57.898425 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.898516 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 15:18:57.898626 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.898699 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 15:18:57.898833 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.898906 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 15:18:57.898996 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.899067 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 15:18:57.899154 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.899223 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 15:18:57.900360 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.900436 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 15:18:57.900517 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 15:18:57.900585 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 15:18:57.900669 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 15:18:57.900764 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 15:18:57.900867 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:18:57.900955 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 15:18:57.901043 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:18:57.901119 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:18:57.901207 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 15:18:57.901377 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 15:18:57.901467 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 15:18:57.901554 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 15:18:57.901638 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 15:18:57.901776 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 15:18:57.901877 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 15:18:57.901980 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 15:18:57.902064 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 15:18:57.902149 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 15:18:57.902257 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 15:18:57.902363 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 15:18:57.902450 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:18:57.902546 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 15:18:57.902628 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 15:18:57.902716 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 15:18:57.902835 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 15:18:57.902930 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 15:18:57.903005 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:18:57.903088 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 15:18:57.903167 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 15:18:57.904653 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 15:18:57.904810 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 15:18:57.904901 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 15:18:57.904982 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:18:57.905069 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 15:18:57.905152 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 15:18:57.906371 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 15:18:57.906515 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 15:18:57.906605 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 15:18:57.906683 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:18:57.906794 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 15:18:57.906891 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 15:18:57.906972 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:18:57.907052 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 15:18:57.907137 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 15:18:57.907218 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:18:57.908015 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 15:18:57.908108 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 15:18:57.908199 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:18:57.908303 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 15:18:57.908386 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 15:18:57.908493 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:18:57.908566 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 15:18:57.908651 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 15:18:57.908764 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:18:57.908862 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 15:18:57.908950 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:18:57.909036 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 15:18:57.909108 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:18:57.909193 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 15:18:57.909294 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:18:57.909377 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 15:18:57.909465 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:18:57.909545 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 15:18:57.909627 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:18:57.909708 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 15:18:57.909804 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:18:57.909889 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 15:18:57.909970 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:18:57.910047 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 15:18:57.910122 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:18:57.910205 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 15:18:57.910419 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 15:18:57.910509 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 15:18:57.910593 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 15:18:57.910849 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 15:18:57.910949 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 15:18:57.911042 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 15:18:57.911125 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 15:18:57.911197 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 15:18:57.911308 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 15:18:57.911394 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 15:18:57.911469 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 15:18:57.911551 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 15:18:57.911627 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 15:18:57.911714 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 15:18:57.911817 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 15:18:57.911908 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 15:18:57.911997 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 15:18:57.912080 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 15:18:57.912166 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 15:18:57.912277 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 15:18:57.912379 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 15:18:57.912473 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:18:57.912571 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 15:18:57.912658 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 15:18:57.912755 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 15:18:57.912844 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 15:18:57.912928 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:18:57.913019 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 15:18:57.913108 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 15:18:57.913190 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 15:18:57.915834 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 15:18:57.915940 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:18:57.916035 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 15:18:57.916129 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 15:18:57.916210 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 15:18:57.916313 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 15:18:57.916396 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 15:18:57.916479 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:18:57.916572 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 15:18:57.916654 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 15:18:57.916753 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 15:18:57.916842 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 15:18:57.916920 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:18:57.917011 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 15:18:57.917096 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 15:18:57.917181 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 15:18:57.917291 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 15:18:57.917376 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 15:18:57.917457 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:18:57.917554 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 15:18:57.917642 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 15:18:57.917724 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 15:18:57.917828 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 15:18:57.917899 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 15:18:57.917982 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:18:57.918072 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 15:18:57.918157 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 15:18:57.918311 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 15:18:57.918400 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 15:18:57.918481 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 15:18:57.918564 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 15:18:57.918647 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:18:57.918775 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 15:18:57.918875 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 15:18:57.918964 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 15:18:57.919041 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:18:57.919141 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 15:18:57.919251 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 15:18:57.919338 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 15:18:57.919413 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:18:57.919493 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:18:57.919569 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:18:57.919636 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:18:57.919722 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 15:18:57.919829 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 15:18:57.919893 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 15:18:57.919977 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 15:18:57.920056 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 15:18:57.920131 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 15:18:57.922342 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 15:18:57.922497 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 15:18:57.922561 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 15:18:57.922635 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 15:18:57.922700 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 15:18:57.922798 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 15:18:57.922882 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 15:18:57.922947 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 15:18:57.923010 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 15:18:57.923087 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 15:18:57.923153 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 15:18:57.923218 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 15:18:57.923305 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 15:18:57.923367 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 15:18:57.923428 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 15:18:57.923496 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 15:18:57.923558 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 15:18:57.923623 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 15:18:57.923692 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 15:18:57.923769 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 15:18:57.923832 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 15:18:57.923842 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:18:57.923850 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:18:57.923857 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:18:57.923865 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:18:57.923875 kernel: iommu: Default domain type: Translated
Feb 13 15:18:57.923883 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:18:57.923891 kernel: efivars: Registered efivars operations
Feb 13 15:18:57.923898 kernel: vgaarb: loaded
Feb 13 15:18:57.923906 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:18:57.923915 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:18:57.923923 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:18:57.923930 kernel: pnp: PnP ACPI init
Feb 13 15:18:57.924008 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:18:57.924021 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:18:57.924028 kernel: NET: Registered PF_INET protocol family
Feb 13 15:18:57.924036 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:18:57.924045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:18:57.924052 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:18:57.924060 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:18:57.924067 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:18:57.924075 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:18:57.924084 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:18:57.924092 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:18:57.924101 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:18:57.924179 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 15:18:57.924191 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:18:57.924199 kernel: kvm [1]: HYP mode not available
Feb 13 15:18:57.924206 kernel: Initialise system trusted keyrings
Feb 13 15:18:57.924214 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:18:57.924221 kernel: Key type asymmetric registered
Feb 13 15:18:57.925330 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:18:57.925342 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:18:57.925351 kernel: io scheduler mq-deadline registered
Feb 13 15:18:57.925359 kernel: io scheduler kyber registered
Feb 13 15:18:57.925367 kernel: io scheduler bfq registered
Feb 13 15:18:57.925375 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:18:57.925537 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 15:18:57.925615 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 15:18:57.925727 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 15:18:57.925940 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 15:18:57.926075 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 15:18:57.926191 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 15:18:57.926338 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:18:57.926456 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:18:57.926577 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:18:57.926693 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:18:57.926825 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:18:57.926940 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:18:57.927058 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:18:57.927169 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:18:57.927307 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:18:57.927426 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:18:57.927537 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:18:57.927648 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:18:57.927776 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:18:57.927889 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:18:57.928007 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:18:57.928128 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:18:57.928251 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:18:57.928364 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
15:18:57.928382 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:18:57.928494 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:18:57.928611 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:18:57.928722 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:18:57.928781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:18:57.928796 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:18:57.928809 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:18:57.928947 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:18:57.929074 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:18:57.929092 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:18:57.929111 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:18:57.929225 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:18:57.929274 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:18:57.929287 kernel: thunder_xcv, ver 1.0 Feb 13 15:18:57.929300 kernel: thunder_bgx, ver 1.0 Feb 13 15:18:57.929313 kernel: nicpf, ver 1.0 Feb 13 15:18:57.929325 kernel: nicvf, ver 1.0 Feb 13 15:18:57.929465 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:18:57.929575 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:18:57 UTC (1739459937) Feb 13 15:18:57.929597 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:18:57.929609 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:18:57.929622 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:18:57.929635 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:18:57.929647 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:18:57.929660 kernel: Segment 
Routing with IPv6 Feb 13 15:18:57.929672 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:18:57.929684 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:18:57.929699 kernel: Key type dns_resolver registered Feb 13 15:18:57.929711 kernel: registered taskstats version 1 Feb 13 15:18:57.929724 kernel: Loading compiled-in X.509 certificates Feb 13 15:18:57.929753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4' Feb 13 15:18:57.929766 kernel: Key type .fscrypt registered Feb 13 15:18:57.929778 kernel: Key type fscrypt-provisioning registered Feb 13 15:18:57.929791 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:18:57.929803 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:18:57.929815 kernel: ima: No architecture policies found Feb 13 15:18:57.929832 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:18:57.929844 kernel: clk: Disabling unused clocks Feb 13 15:18:57.929856 kernel: Freeing unused kernel memory: 38336K Feb 13 15:18:57.929868 kernel: Run /init as init process Feb 13 15:18:57.929881 kernel: with arguments: Feb 13 15:18:57.929894 kernel: /init Feb 13 15:18:57.929907 kernel: with environment: Feb 13 15:18:57.929919 kernel: HOME=/ Feb 13 15:18:57.929931 kernel: TERM=linux Feb 13 15:18:57.929944 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:18:57.929959 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:18:57.929977 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:18:57.929991 systemd[1]: Detected virtualization kvm. Feb 13 15:18:57.930004 systemd[1]: Detected architecture arm64. 
Feb 13 15:18:57.930017 systemd[1]: Running in initrd.
Feb 13 15:18:57.930030 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:18:57.930045 systemd[1]: Hostname set to .
Feb 13 15:18:57.930058 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:18:57.930071 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:18:57.930085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:18:57.930100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:18:57.930115 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:18:57.930128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:18:57.930141 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:18:57.930158 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:18:57.930172 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:18:57.930186 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:18:57.930199 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:18:57.930212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:18:57.930225 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:18:57.932834 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:18:57.932855 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:18:57.932863 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:18:57.932875 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:18:57.932884 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:18:57.932894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:18:57.932904 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:18:57.932914 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:18:57.932922 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:18:57.932932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:18:57.932943 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:18:57.932952 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:18:57.932962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:18:57.932971 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:18:57.932979 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:18:57.932987 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:18:57.932995 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:18:57.933003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:18:57.933017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:18:57.933075 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 15:18:57.933097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:18:57.933108 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:18:57.933117 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:18:57.933125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:18:57.933133 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:18:57.933142 kernel: Bridge firewalling registered
Feb 13 15:18:57.933150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:18:57.933160 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:18:57.933169 systemd-journald[237]: Journal started
Feb 13 15:18:57.933188 systemd-journald[237]: Runtime Journal (/run/log/journal/9c3217296f454077a17f569f5f2a262d) is 8M, max 76.6M, 68.6M free.
Feb 13 15:18:57.895523 systemd-modules-load[238]: Inserted module 'overlay'
Feb 13 15:18:57.936655 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:18:57.920903 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 13 15:18:57.942594 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:18:57.956888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:18:57.959142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:18:57.962601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:18:57.964320 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:18:57.972945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:18:57.976463 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:18:57.985318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:18:57.994725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:18:58.002401 dracut-cmdline[271]: dracut-dracut-053
Feb 13 15:18:58.004510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:18:58.009140 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:18:58.035948 systemd-resolved[277]: Positive Trust Anchors:
Feb 13 15:18:58.035966 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:18:58.035996 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:18:58.041027 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 13 15:18:58.042849 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:18:58.043506 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:18:58.133301 kernel: SCSI subsystem initialized
Feb 13 15:18:58.139274 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:18:58.147744 kernel: iscsi: registered transport (tcp)
Feb 13 15:18:58.162295 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:18:58.162409 kernel: QLogic iSCSI HBA Driver
Feb 13 15:18:58.244524 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:18:58.252514 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:18:58.282635 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:18:58.282711 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:18:58.282745 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:18:58.341321 kernel: raid6: neonx8 gen() 15659 MB/s
Feb 13 15:18:58.358372 kernel: raid6: neonx4 gen() 15684 MB/s
Feb 13 15:18:58.375280 kernel: raid6: neonx2 gen() 13199 MB/s
Feb 13 15:18:58.392265 kernel: raid6: neonx1 gen() 10461 MB/s
Feb 13 15:18:58.409273 kernel: raid6: int64x8 gen() 6747 MB/s
Feb 13 15:18:58.426304 kernel: raid6: int64x4 gen() 7303 MB/s
Feb 13 15:18:58.443312 kernel: raid6: int64x2 gen() 6074 MB/s
Feb 13 15:18:58.460298 kernel: raid6: int64x1 gen() 5020 MB/s
Feb 13 15:18:58.460402 kernel: raid6: using algorithm neonx4 gen() 15684 MB/s
Feb 13 15:18:58.477288 kernel: raid6: .... xor() 12344 MB/s, rmw enabled
Feb 13 15:18:58.477360 kernel: raid6: using neon recovery algorithm
Feb 13 15:18:58.482346 kernel: xor: measuring software checksum speed
Feb 13 15:18:58.482419 kernel: 8regs : 21664 MB/sec
Feb 13 15:18:58.482435 kernel: 32regs : 21681 MB/sec
Feb 13 15:18:58.483272 kernel: arm64_neon : 27946 MB/sec
Feb 13 15:18:58.483319 kernel: xor: using function: arm64_neon (27946 MB/sec)
Feb 13 15:18:58.537285 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:18:58.550530 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:18:58.557433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:18:58.571485 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Feb 13 15:18:58.575663 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:18:58.585441 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:18:58.602930 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Feb 13 15:18:58.658926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:18:58.665690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:18:58.718073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:18:58.729909 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:18:58.762179 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:18:58.765129 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:18:58.767182 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:18:58.768870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:18:58.775613 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:18:58.794257 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:18:58.836539 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:18:58.843486 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:18:58.843568 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Feb 13 15:18:58.855257 kernel: ACPI: bus type USB registered
Feb 13 15:18:58.855323 kernel: usbcore: registered new interface driver usbfs
Feb 13 15:18:58.866297 kernel: usbcore: registered new interface driver hub
Feb 13 15:18:58.868767 kernel: usbcore: registered new device driver usb
Feb 13 15:18:58.880566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:18:58.880796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:18:58.883840 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:18:58.885167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:18:58.885377 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:18:58.888973 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:18:58.894578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:18:58.897302 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:18:58.906608 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Feb 13 15:18:58.923694 kernel: sr 0:0:0:0: Power-on or device reset occurred
Feb 13 15:18:58.923990 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Feb 13 15:18:58.924116 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 13 15:18:58.924438 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Feb 13 15:18:58.924590 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:18:58.924611 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Feb 13 15:18:58.924805 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Feb 13 15:18:58.924923 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:18:58.925090 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Feb 13 15:18:58.925178 kernel: hub 1-0:1.0: USB hub found
Feb 13 15:18:58.926373 kernel: hub 1-0:1.0: 4 ports detected
Feb 13 15:18:58.926469 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 13 15:18:58.926583 kernel: hub 2-0:1.0: USB hub found
Feb 13 15:18:58.926674 kernel: hub 2-0:1.0: 4 ports detected
Feb 13 15:18:58.916262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:18:58.922494 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:18:58.932375 kernel: sd 0:0:0:1: Power-on or device reset occurred
Feb 13 15:18:58.942829 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Feb 13 15:18:58.942969 kernel: sd 0:0:0:1: [sda] Write Protect is off
Feb 13 15:18:58.943057 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Feb 13 15:18:58.943165 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:18:58.943289 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:18:58.943301 kernel: GPT:17805311 != 80003071
Feb 13 15:18:58.943311 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:18:58.943321 kernel: GPT:17805311 != 80003071
Feb 13 15:18:58.943330 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:18:58.943343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:18:58.943354 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Feb 13 15:18:58.952141 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:18:58.987253 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (520)
Feb 13 15:18:58.989257 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (504)
Feb 13 15:19:59.017080 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Feb 13 15:19:59.028811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Feb 13 15:19:59.038652 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Feb 13 15:19:59.046968 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Feb 13 15:19:59.048506 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Feb 13 15:19:59.058545 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:19:59.066093 disk-uuid[578]: Primary Header is updated.
Feb 13 15:19:59.066093 disk-uuid[578]: Secondary Entries is updated.
Feb 13 15:19:59.066093 disk-uuid[578]: Secondary Header is updated.
Feb 13 15:18:59.073276 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:18:59.165287 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Feb 13 15:18:59.406284 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Feb 13 15:18:59.539699 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Feb 13 15:18:59.539764 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Feb 13 15:18:59.542322 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Feb 13 15:18:59.596279 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Feb 13 15:18:59.596582 kernel: usbcore: registered new interface driver usbhid
Feb 13 15:18:59.597777 kernel: usbhid: USB HID core driver
Feb 13 15:19:00.097639 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:19:00.098856 disk-uuid[579]: The operation has completed successfully.
Feb 13 15:19:00.180467 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:19:00.180585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:19:00.201510 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:19:00.207314 sh[593]: Success
Feb 13 15:19:00.221287 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:19:00.284145 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:19:00.293766 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:19:00.294629 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:19:00.331285 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:19:00.331423 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:00.331436 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:19:00.331447 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:19:00.331456 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:19:00.338294 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:19:00.340166 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:19:00.341389 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:19:00.353698 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:19:00.358486 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:19:00.379134 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:19:00.379299 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:00.379314 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:19:00.386250 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:19:00.386358 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:19:00.407301 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:19:00.409356 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:19:00.420391 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:19:00.426916 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:19:00.517298 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:19:00.529025 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:19:00.548569 ignition[692]: Ignition 2.20.0
Feb 13 15:19:00.548581 ignition[692]: Stage: fetch-offline
Feb 13 15:19:00.548620 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:00.548629 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:19:00.548887 ignition[692]: parsed url from cmdline: ""
Feb 13 15:19:00.548891 ignition[692]: no config URL provided
Feb 13 15:19:00.548896 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:19:00.553400 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:19:00.548905 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:19:00.548912 ignition[692]: failed to fetch config: resource requires networking
Feb 13 15:19:00.549116 ignition[692]: Ignition finished successfully
Feb 13 15:19:00.565546 systemd-networkd[780]: lo: Link UP
Feb 13 15:19:00.565561 systemd-networkd[780]: lo: Gained carrier
Feb 13 15:19:00.567637 systemd-networkd[780]: Enumeration completed
Feb 13 15:19:00.567809 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:19:00.568515 systemd[1]: Reached target network.target - Network.
Feb 13 15:19:00.569787 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:00.569790 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:19:00.570472 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:00.570475 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:19:00.571761 systemd-networkd[780]: eth0: Link UP
Feb 13 15:19:00.571765 systemd-networkd[780]: eth0: Gained carrier
Feb 13 15:19:00.571773 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:00.575659 systemd-networkd[780]: eth1: Link UP
Feb 13 15:19:00.575662 systemd-networkd[780]: eth1: Gained carrier
Feb 13 15:19:00.575672 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:00.576445 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:19:00.592014 ignition[784]: Ignition 2.20.0
Feb 13 15:19:00.592026 ignition[784]: Stage: fetch
Feb 13 15:19:00.592243 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:00.592254 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:19:00.592359 ignition[784]: parsed url from cmdline: ""
Feb 13 15:19:00.592362 ignition[784]: no config URL provided
Feb 13 15:19:00.592367 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:19:00.592374 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:19:00.592467 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Feb 13 15:19:00.593348 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 15:19:00.609375 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:19:00.628374 systemd-networkd[780]: eth0: DHCPv4 address 5.75.234.95/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 15:19:00.794393 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Feb 13 15:19:00.800934 ignition[784]: GET result: OK
Feb 13 15:19:00.801077 ignition[784]: parsing config with SHA512: a5a78750a7e7826c20cebeea534d15ff1a0aa99626d4ac638f2415114343a2a58bd85a0504b1a5ffefc1a803cad629e819ba04f031165f7e69f8329ce52ecae4
Feb 13 15:19:00.806152 unknown[784]: fetched base config from "system"
Feb 13 15:19:00.806169 unknown[784]: fetched base config from "system"
Feb 13 15:19:00.806515 ignition[784]: fetch: fetch complete
Feb 13 15:19:00.806175 unknown[784]: fetched user config from "hetzner"
Feb 13 15:19:00.806520 ignition[784]: fetch: fetch passed
Feb 13 15:19:00.806593 ignition[784]: Ignition finished successfully
Feb 13 15:19:00.809225 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:19:00.813583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:19:00.840911 ignition[792]: Ignition 2.20.0
Feb 13 15:19:00.840924 ignition[792]: Stage: kargs
Feb 13 15:19:00.841126 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:00.841136 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:19:00.842096 ignition[792]: kargs: kargs passed
Feb 13 15:19:00.842159 ignition[792]: Ignition finished successfully
Feb 13 15:19:00.844894 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:19:00.849608 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:19:00.866542 ignition[798]: Ignition 2.20.0
Feb 13 15:19:00.866560 ignition[798]: Stage: disks
Feb 13 15:19:00.866783 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:00.866794 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:19:00.867576 ignition[798]: disks: disks passed
Feb 13 15:19:00.867630 ignition[798]: Ignition finished successfully
Feb 13 15:19:00.871131 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:19:00.872339 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:19:00.873172 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:19:00.874367 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:19:00.875392 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:19:00.876357 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:19:00.883531 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:19:00.903478 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:19:00.908350 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:19:01.334435 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:19:01.388258 kernel: EXT4-fs (sda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:19:01.389893 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:19:01.391348 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:19:01.401446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:19:01.407815 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:19:01.413435 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:19:01.415618 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:19:01.417224 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:19:01.420611 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:19:01.423387 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815)
Feb 13 15:19:01.425305 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:19:01.425415 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:01.425459 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:19:01.428962 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:19:01.433023 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:19:01.433084 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:19:01.436835 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:19:01.503266 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:19:01.513142 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:19:01.514685 coreos-metadata[817]: Feb 13 15:19:01.514 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Feb 13 15:19:01.518322 coreos-metadata[817]: Feb 13 15:19:01.518 INFO Fetch successful
Feb 13 15:19:01.520281 coreos-metadata[817]: Feb 13 15:19:01.519 INFO wrote hostname ci-4230-0-1-4-233282f7f8 to /sysroot/etc/hostname
Feb 13 15:19:01.523072 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:19:01.523536 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:19:01.529941 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:19:01.654691 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:19:01.664410 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:19:01.667533 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:19:01.677346 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:19:01.701685 ignition[932]: INFO : Ignition 2.20.0
Feb 13 15:19:01.702652 ignition[932]: INFO : Stage: mount
Feb 13 15:19:01.702652 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:01.702652 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:19:01.704901 ignition[932]: INFO : mount: mount passed
Feb 13 15:19:01.704901 ignition[932]: INFO : Ignition finished successfully
Feb 13 15:19:01.706121 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:19:01.715498 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:19:01.717189 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:19:02.330353 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:19:02.341522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:19:02.355263 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945)
Feb 13 15:19:02.357802 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:19:02.358061 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:02.358213 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:19:02.361280 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:19:02.361352 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:19:02.364083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:19:02.385639 ignition[962]: INFO : Ignition 2.20.0
Feb 13 15:19:02.385639 ignition[962]: INFO : Stage: files
Feb 13 15:19:02.386825 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:02.386825 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:19:02.386825 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:19:02.391569 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:19:02.391569 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:19:02.395990 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:19:02.397467 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:19:02.397467 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:19:02.396485 unknown[962]: wrote ssh authorized keys file for user: core
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:19:02.401056 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:19:02.455469 systemd-networkd[780]: eth1: Gained IPv6LL
Feb 13 15:19:02.583502 systemd-networkd[780]: eth0: Gained IPv6LL
Feb 13 15:19:02.838548 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:19:03.168982 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:19:03.168982 ignition[962]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 15:19:03.171388 ignition[962]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:19:03.171388 ignition[962]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 15:19:03.171388 ignition[962]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 15:19:03.171388 ignition[962]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:19:03.171388 ignition[962]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:19:03.171388 ignition[962]: INFO : files: files passed
Feb 13 15:19:03.171388 ignition[962]: INFO : Ignition finished successfully
Feb 13 15:19:03.172542 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:19:03.179437 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:19:03.181627 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:19:03.185646 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:19:03.186382 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:19:03.196950 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:19:03.196950 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:19:03.200381 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:19:03.203144 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:19:03.204663 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:19:03.212549 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:19:03.241598 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:19:03.241823 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:19:03.246719 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:19:03.247940 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:19:03.249328 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:19:03.251776 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:19:03.268388 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:19:03.274470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:19:03.292959 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:19:03.295644 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:19:03.297069 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:19:03.297708 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:19:03.297843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:19:03.300060 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:19:03.301318 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:19:03.302523 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:19:03.303715 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:19:03.305108 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:19:03.306681 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:19:03.308476 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:19:03.309596 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:19:03.310773 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:19:03.311824 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:19:03.312740 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:19:03.312882 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:19:03.314602 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:19:03.315565 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:19:03.316517 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:19:03.316630 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:19:03.317662 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:19:03.317890 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:19:03.319267 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:19:03.319453 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:19:03.320466 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:19:03.320615 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:19:03.321349 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 15:19:03.321492 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:19:03.327401 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:19:03.332605 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:19:03.333310 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:19:03.334460 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:19:03.336747 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:19:03.338471 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:19:03.349972 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:19:03.350066 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:19:03.362274 ignition[1015]: INFO : Ignition 2.20.0
Feb 13 15:19:03.362274 ignition[1015]: INFO : Stage: umount
Feb 13 15:19:03.362274 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:03.362274 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 15:19:03.370106 ignition[1015]: INFO : umount: umount passed
Feb 13 15:19:03.370106 ignition[1015]: INFO : Ignition finished successfully
Feb 13 15:19:03.363849 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:19:03.370623 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:19:03.372274 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:19:03.374005 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:19:03.374143 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:19:03.374988 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:19:03.375039 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:19:03.376516 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:19:03.376621 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:19:03.377888 systemd[1]: Stopped target network.target - Network.
Feb 13 15:19:03.378868 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:19:03.378934 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:19:03.379854 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:19:03.382329 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:19:03.389666 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:19:03.391683 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:19:03.396412 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:19:03.397625 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:19:03.397716 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:19:03.399085 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:19:03.399136 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:19:03.400056 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:19:03.400132 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:19:03.402331 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:19:03.402390 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:19:03.403327 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:19:03.404467 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:19:03.407807 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:19:03.407906 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:19:03.410318 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:19:03.411309 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:19:03.412966 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:19:03.413083 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:19:03.418815 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:19:03.419115 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:19:03.419214 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:19:03.422268 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:19:03.422969 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:19:03.423036 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:19:03.429409 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:19:03.430457 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:19:03.430563 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:19:03.432180 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:19:03.432288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:19:03.434060 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:19:03.434120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:19:03.434928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:19:03.434972 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:19:03.436633 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:19:03.438190 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:19:03.440293 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:19:03.450059 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:19:03.450176 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:19:03.459189 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:19:03.459449 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:19:03.461933 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:19:03.462004 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:19:03.463385 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:19:03.463424 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:19:03.464422 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:19:03.464472 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:19:03.465865 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:19:03.465914 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:19:03.467366 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:19:03.467417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:19:03.482652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:19:03.484007 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:19:03.486280 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:19:03.491052 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:19:03.491117 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:19:03.493421 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:19:03.493479 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:19:03.496073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:19:03.496132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:19:03.499327 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:19:03.499407 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:19:03.499707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:19:03.499817 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:19:03.501435 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:19:03.514070 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:19:03.523796 systemd[1]: Switching root.
Feb 13 15:19:03.556152 systemd-journald[237]: Journal stopped
Feb 13 15:19:04.596929 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:19:04.597048 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:19:04.597068 kernel: SELinux: policy capability open_perms=1
Feb 13 15:19:04.597078 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:19:04.597092 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:19:04.597101 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:19:04.597113 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:19:04.597122 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:19:04.597130 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:19:04.597143 kernel: audit: type=1403 audit(1739459943.680:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:19:04.597154 systemd[1]: Successfully loaded SELinux policy in 35.475ms.
Feb 13 15:19:04.597174 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.530ms.
Feb 13 15:19:04.597187 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:19:04.597198 systemd[1]: Detected virtualization kvm.
Feb 13 15:19:04.597208 systemd[1]: Detected architecture arm64.
Feb 13 15:19:04.597218 systemd[1]: Detected first boot.
Feb 13 15:19:04.597277 systemd[1]: Hostname set to <ci-4230-0-1-4-233282f7f8>.
Feb 13 15:19:04.597288 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:19:04.597305 zram_generator::config[1060]: No configuration found.
Feb 13 15:19:04.597319 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:19:04.597332 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:19:04.597344 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:19:04.597359 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:19:04.597369 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:19:04.597379 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:19:04.597389 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:19:04.597399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:19:04.597409 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:19:04.597419 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:19:04.597431 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:19:04.597442 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:19:04.597452 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:19:04.597463 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:19:04.597473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:19:04.597484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:19:04.597495 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:19:04.597504 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:19:04.597515 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:19:04.597528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:19:04.597538 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:19:04.597548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:19:04.597558 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:19:04.597568 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:19:04.597578 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:19:04.597590 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:19:04.597601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:19:04.597611 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:19:04.597623 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:19:04.597632 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:19:04.597643 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:19:04.597653 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:19:04.597663 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:19:04.597673 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:19:04.597709 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:19:04.597723 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:19:04.597734 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:19:04.597744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:19:04.597754 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:19:04.597764 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:19:04.597774 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:19:04.597784 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:19:04.597794 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:19:04.597808 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:19:04.597818 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:19:04.597828 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:19:04.597838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:19:04.597848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:19:04.597858 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:19:04.597869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:19:04.597883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:19:04.597895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:19:04.597906 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:19:04.597916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:19:04.597926 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:19:04.597937 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:19:04.597947 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:19:04.597959 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:19:04.597969 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:19:04.597980 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:19:04.597991 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:19:04.598001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:19:04.598011 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:19:04.598021 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:19:04.598031 kernel: fuse: init (API version 7.39) Feb 13 15:19:04.598042 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:19:04.598053 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:19:04.598063 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:19:04.598073 systemd[1]: Stopped verity-setup.service. Feb 13 15:19:04.598083 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:19:04.598095 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:19:04.598105 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:19:04.598115 kernel: ACPI: bus type drm_connector registered Feb 13 15:19:04.598125 kernel: loop: module loaded Feb 13 15:19:04.598135 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:19:04.598146 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 15:19:04.598158 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:19:04.598168 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:19:04.598178 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:19:04.598189 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:19:04.598199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:19:04.598209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:19:04.598219 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:19:04.598241 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:19:04.598255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:19:04.598266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:19:04.598276 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:19:04.598296 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:19:04.598307 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:19:04.598317 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:19:04.598327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:19:04.598337 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:19:04.598348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:19:04.598360 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:19:04.598372 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:19:04.598382 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Feb 13 15:19:04.598425 systemd-journald[1128]: Collecting audit messages is disabled. Feb 13 15:19:04.598453 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:19:04.598464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:19:04.598476 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:19:04.598490 systemd-journald[1128]: Journal started Feb 13 15:19:04.598515 systemd-journald[1128]: Runtime Journal (/run/log/journal/9c3217296f454077a17f569f5f2a262d) is 8M, max 76.6M, 68.6M free. Feb 13 15:19:04.257626 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:19:04.268033 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:19:04.268579 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:19:04.603333 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:19:04.618783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:19:04.626004 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:19:04.626092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:19:04.639381 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:19:04.647459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:19:04.647547 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:19:04.656293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Feb 13 15:19:04.668387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:19:04.672255 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:19:04.681280 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:19:04.685462 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:19:04.693834 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:19:04.695016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:19:04.696592 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:19:04.700294 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:19:04.702556 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:19:04.729291 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:19:04.738278 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 15:19:04.756942 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:19:04.767583 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:19:04.782282 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:19:04.782794 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Feb 13 15:19:04.782804 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Feb 13 15:19:04.788317 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:19:04.792857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:19:04.800371 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:19:04.803890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:19:04.823388 systemd-journald[1128]: Time spent on flushing to /var/log/journal/9c3217296f454077a17f569f5f2a262d is 39.944ms for 1137 entries. Feb 13 15:19:04.823388 systemd-journald[1128]: System Journal (/var/log/journal/9c3217296f454077a17f569f5f2a262d) is 8M, max 584.8M, 576.8M free. Feb 13 15:19:04.878984 systemd-journald[1128]: Received client request to flush runtime journal. Feb 13 15:19:04.879113 kernel: loop1: detected capacity change from 0 to 113512 Feb 13 15:19:04.821540 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:19:04.836802 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:19:04.848552 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:19:04.883247 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:19:04.896403 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:19:04.904381 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 15:19:04.909560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:19:04.935008 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Feb 13 15:19:04.935025 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Feb 13 15:19:04.941292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 15:19:04.960757 kernel: loop3: detected capacity change from 0 to 8 Feb 13 15:19:04.984251 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 15:19:05.017363 kernel: loop5: detected capacity change from 0 to 113512 Feb 13 15:19:05.034931 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 15:19:05.064257 kernel: loop7: detected capacity change from 0 to 8 Feb 13 15:19:05.065786 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 15:19:05.068028 (sd-merge)[1209]: Merged extensions into '/usr'. Feb 13 15:19:05.078786 systemd[1]: Reload requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:19:05.078894 systemd[1]: Reloading... Feb 13 15:19:05.209977 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:19:05.223298 zram_generator::config[1240]: No configuration found. Feb 13 15:19:05.341495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:19:05.404300 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:19:05.404515 systemd[1]: Reloading finished in 325 ms. Feb 13 15:19:05.434095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:19:05.438286 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:19:05.439354 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:19:05.450264 systemd[1]: Starting ensure-sysext.service... Feb 13 15:19:05.452472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:19:05.457544 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 15:19:05.468110 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:19:05.468129 systemd[1]: Reloading... Feb 13 15:19:05.479449 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:19:05.480109 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:19:05.481191 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:19:05.481670 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Feb 13 15:19:05.481860 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Feb 13 15:19:05.486450 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:19:05.486460 systemd-tmpfiles[1276]: Skipping /boot Feb 13 15:19:05.497564 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:19:05.497581 systemd-tmpfiles[1276]: Skipping /boot Feb 13 15:19:05.520988 systemd-udevd[1277]: Using default interface naming scheme 'v255'. Feb 13 15:19:05.570257 zram_generator::config[1305]: No configuration found. Feb 13 15:19:05.771548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:19:05.799255 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:19:05.817287 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1311) Feb 13 15:19:05.848576 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:19:05.849043 systemd[1]: Reloading finished in 380 ms. Feb 13 15:19:05.858577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:19:05.861275 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:19:05.919376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:19:05.928582 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:19:05.931937 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:19:05.933372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:19:05.945576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:19:05.950450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:19:05.954431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:19:05.958423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:19:05.959482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:19:05.964384 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:19:05.966946 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:19:05.978155 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:19:05.978239 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:19:05.978282 kernel: [drm] features: -context_init Feb 13 15:19:05.977510 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Feb 13 15:19:05.979420 kernel: [drm] number of scanouts: 1 Feb 13 15:19:05.980852 kernel: [drm] number of cap sets: 0 Feb 13 15:19:05.987255 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:19:05.985446 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:19:05.992288 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:19:05.992263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:19:06.007264 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:19:06.007865 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:19:06.011320 systemd[1]: Finished ensure-sysext.service. Feb 13 15:19:06.012668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:19:06.014269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:19:06.017728 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 15:19:06.030959 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:19:06.032168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:19:06.055333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:19:06.063592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:19:06.065355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:19:06.065489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Feb 13 15:19:06.070615 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:19:06.075288 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:19:06.079659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:19:06.084322 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:19:06.092215 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:19:06.092462 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:19:06.093983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:19:06.095339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:19:06.096472 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:19:06.112372 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:19:06.113949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:19:06.116354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:19:06.131705 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:19:06.137866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:19:06.138799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:19:06.140570 augenrules[1432]: No rules Feb 13 15:19:06.145605 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 15:19:06.147407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:19:06.148187 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:19:06.149356 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:19:06.150604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:19:06.152283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:19:06.154870 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:19:06.166481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:19:06.167201 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:19:06.169277 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:19:06.212102 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:19:06.225659 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:19:06.249327 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:19:06.278015 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:19:06.279199 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:19:06.287529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:19:06.292042 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:19:06.293113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:19:06.300591 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Feb 13 15:19:06.302447 systemd-networkd[1398]: lo: Link UP Feb 13 15:19:06.302477 systemd-networkd[1398]: lo: Gained carrier Feb 13 15:19:06.305502 systemd-networkd[1398]: Enumeration completed Feb 13 15:19:06.305640 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:19:06.312466 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:19:06.312477 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:19:06.313202 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:19:06.313214 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:19:06.313486 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:19:06.315047 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:19:06.316949 systemd-networkd[1398]: eth0: Link UP Feb 13 15:19:06.316959 systemd-networkd[1398]: eth0: Gained carrier Feb 13 15:19:06.316976 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:19:06.324523 systemd-networkd[1398]: eth1: Link UP Feb 13 15:19:06.324536 systemd-networkd[1398]: eth1: Gained carrier Feb 13 15:19:06.324560 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:19:06.326408 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:19:06.337395 systemd-resolved[1400]: Positive Trust Anchors: Feb 13 15:19:06.337756 systemd-resolved[1400]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:19:06.337841 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:19:06.347743 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:19:06.348454 systemd-resolved[1400]: Using system hostname 'ci-4230-0-1-4-233282f7f8'. Feb 13 15:19:06.350331 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:19:06.351124 systemd[1]: Reached target network.target - Network. Feb 13 15:19:06.351735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:19:06.352568 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:19:06.353060 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:19:06.354481 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:19:06.355501 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:19:06.356426 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:19:06.357138 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:19:06.357927 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. 
Feb 13 15:19:06.358344 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:19:06.358973 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:19:06.359003 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:19:06.359532 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:19:06.361714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:19:06.365466 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:19:06.369428 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:19:06.370353 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:19:06.371011 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:19:06.373314 systemd-networkd[1398]: eth0: DHCPv4 address 5.75.234.95/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:19:06.374960 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:19:06.375169 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:19:06.376321 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:19:06.377969 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:19:06.378835 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:19:06.380047 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:19:06.381101 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:19:06.381863 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 15:19:06.381901 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:19:06.391595 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:19:06.396881 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:19:06.408068 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:19:06.414364 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:19:06.418533 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:19:06.421432 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:19:06.423443 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:19:06.427040 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:19:06.433477 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:19:06.436490 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:19:06.444481 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:19:06.447039 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:19:06.447755 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:19:06.450255 dbus-daemon[1469]: [system] SELinux support is enabled Feb 13 15:19:06.451494 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:19:06.454214 jq[1470]: false Feb 13 15:19:06.457440 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Feb 13 15:19:06.462940 coreos-metadata[1468]: Feb 13 15:19:06.462 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:19:06.464363 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:19:06.467424 coreos-metadata[1468]: Feb 13 15:19:06.467 INFO Fetch successful Feb 13 15:19:06.467424 coreos-metadata[1468]: Feb 13 15:19:06.467 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:19:06.467424 coreos-metadata[1468]: Feb 13 15:19:06.467 INFO Fetch successful Feb 13 15:19:06.473861 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:19:06.477515 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:19:06.486627 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:19:06.486709 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:19:06.488458 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:19:06.488484 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:19:06.495797 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:19:06.496427 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:19:06.509608 jq[1481]: true Feb 13 15:19:06.534952 extend-filesystems[1473]: Found loop4 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found loop5 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found loop6 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found loop7 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda1 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda2 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda3 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found usr Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda4 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda6 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda7 Feb 13 15:19:06.534952 extend-filesystems[1473]: Found sda9 Feb 13 15:19:06.534952 extend-filesystems[1473]: Checking size of /dev/sda9 Feb 13 15:19:06.557646 update_engine[1480]: I20250213 15:19:06.543695 1480 main.cc:92] Flatcar Update Engine starting Feb 13 15:19:06.557950 jq[1494]: true Feb 13 15:19:06.557186 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:19:06.557444 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:19:06.561007 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:19:06.564383 update_engine[1480]: I20250213 15:19:06.563764 1480 update_check_scheduler.cc:74] Next update check in 11m2s Feb 13 15:19:06.567497 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:19:06.567724 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:19:06.571320 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:19:06.573634 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 15:19:06.581726 extend-filesystems[1473]: Resized partition /dev/sda9 Feb 13 15:19:06.587487 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:19:06.599166 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:19:06.658573 systemd-logind[1478]: New seat seat0. Feb 13 15:19:06.663746 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:19:06.663774 systemd-logind[1478]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:19:06.664013 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:19:06.688620 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:19:06.693094 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:19:06.714898 systemd[1]: Starting sshkeys.service... Feb 13 15:19:06.721782 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1316) Feb 13 15:19:06.735728 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:19:06.735062 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:19:06.742804 extend-filesystems[1517]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:19:06.742804 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:19:06.742804 extend-filesystems[1517]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:19:06.741350 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:19:06.754761 extend-filesystems[1473]: Resized filesystem in /dev/sda9 Feb 13 15:19:06.754761 extend-filesystems[1473]: Found sr0 Feb 13 15:19:06.751945 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:19:06.752224 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Feb 13 15:19:06.858349 coreos-metadata[1541]: Feb 13 15:19:06.858 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:19:06.859825 coreos-metadata[1541]: Feb 13 15:19:06.859 INFO Fetch successful Feb 13 15:19:06.864441 unknown[1541]: wrote ssh authorized keys file for user: core Feb 13 15:19:06.894273 update-ssh-keys[1551]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:19:06.896603 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:19:06.901947 systemd[1]: Finished sshkeys.service. Feb 13 15:19:06.911946 containerd[1500]: time="2025-02-13T15:19:06.911846440Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:19:06.919207 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:19:06.946525 containerd[1500]: time="2025-02-13T15:19:06.946462600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:19:06.949972 containerd[1500]: time="2025-02-13T15:19:06.949916720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:19:06.949972 containerd[1500]: time="2025-02-13T15:19:06.949968480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:19:06.950079 containerd[1500]: time="2025-02-13T15:19:06.949993920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:19:06.950240 containerd[1500]: time="2025-02-13T15:19:06.950206520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 15:19:06.950324 containerd[1500]: time="2025-02-13T15:19:06.950262720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.950352000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.950376240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.950687520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.950709640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.950729000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.950741280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.950841960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951265 containerd[1500]: time="2025-02-13T15:19:06.951083200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951409 containerd[1500]: time="2025-02-13T15:19:06.951327360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:19:06.951409 containerd[1500]: time="2025-02-13T15:19:06.951348240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:19:06.951552 containerd[1500]: time="2025-02-13T15:19:06.951521520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:19:06.951624 containerd[1500]: time="2025-02-13T15:19:06.951605760Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:19:06.955821 containerd[1500]: time="2025-02-13T15:19:06.955778680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:19:06.955891 containerd[1500]: time="2025-02-13T15:19:06.955850240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:19:06.955891 containerd[1500]: time="2025-02-13T15:19:06.955867560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:19:06.955891 containerd[1500]: time="2025-02-13T15:19:06.955884640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:19:06.955964 containerd[1500]: time="2025-02-13T15:19:06.955899360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:19:06.956240 containerd[1500]: time="2025-02-13T15:19:06.956089160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 15:19:06.956394 containerd[1500]: time="2025-02-13T15:19:06.956372720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:19:06.956498 containerd[1500]: time="2025-02-13T15:19:06.956480440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:19:06.956531 containerd[1500]: time="2025-02-13T15:19:06.956500600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:19:06.956531 containerd[1500]: time="2025-02-13T15:19:06.956517920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:19:06.956564 containerd[1500]: time="2025-02-13T15:19:06.956541520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:19:06.956564 containerd[1500]: time="2025-02-13T15:19:06.956555760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:19:06.956601 containerd[1500]: time="2025-02-13T15:19:06.956567720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:19:06.956601 containerd[1500]: time="2025-02-13T15:19:06.956582960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:19:06.956601 containerd[1500]: time="2025-02-13T15:19:06.956597080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:19:06.956647 containerd[1500]: time="2025-02-13T15:19:06.956609760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 13 15:19:06.956647 containerd[1500]: time="2025-02-13T15:19:06.956621760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:19:06.956647 containerd[1500]: time="2025-02-13T15:19:06.956633400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:19:06.956759 containerd[1500]: time="2025-02-13T15:19:06.956653360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956759 containerd[1500]: time="2025-02-13T15:19:06.956721040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956759 containerd[1500]: time="2025-02-13T15:19:06.956737760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956759 containerd[1500]: time="2025-02-13T15:19:06.956751000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956833 containerd[1500]: time="2025-02-13T15:19:06.956763200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956833 containerd[1500]: time="2025-02-13T15:19:06.956777120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956833 containerd[1500]: time="2025-02-13T15:19:06.956788840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956833 containerd[1500]: time="2025-02-13T15:19:06.956801920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956833 containerd[1500]: time="2025-02-13T15:19:06.956814280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:19:06.956833 containerd[1500]: time="2025-02-13T15:19:06.956828520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956932 containerd[1500]: time="2025-02-13T15:19:06.956840720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956932 containerd[1500]: time="2025-02-13T15:19:06.956859640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956932 containerd[1500]: time="2025-02-13T15:19:06.956871280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956932 containerd[1500]: time="2025-02-13T15:19:06.956885800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:19:06.956932 containerd[1500]: time="2025-02-13T15:19:06.956906040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956932 containerd[1500]: time="2025-02-13T15:19:06.956918640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.956932 containerd[1500]: time="2025-02-13T15:19:06.956929400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:19:06.957187 containerd[1500]: time="2025-02-13T15:19:06.957114200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:19:06.957187 containerd[1500]: time="2025-02-13T15:19:06.957140960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:19:06.957187 containerd[1500]: time="2025-02-13T15:19:06.957151840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:19:06.957187 containerd[1500]: time="2025-02-13T15:19:06.957162360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:19:06.957187 containerd[1500]: time="2025-02-13T15:19:06.957171280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:19:06.957187 containerd[1500]: time="2025-02-13T15:19:06.957183480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:19:06.957187 containerd[1500]: time="2025-02-13T15:19:06.957192960Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:19:06.957352 containerd[1500]: time="2025-02-13T15:19:06.957203920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:19:06.957624 containerd[1500]: time="2025-02-13T15:19:06.957574280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:19:06.957745 containerd[1500]: time="2025-02-13T15:19:06.957630560Z" level=info msg="Connect containerd service" Feb 13 15:19:06.957745 containerd[1500]: time="2025-02-13T15:19:06.957682880Z" level=info msg="using legacy CRI server" Feb 13 15:19:06.957745 containerd[1500]: time="2025-02-13T15:19:06.957692640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:19:06.960241 containerd[1500]: time="2025-02-13T15:19:06.957930840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:19:06.960578 containerd[1500]: time="2025-02-13T15:19:06.960538040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:19:06.961157 containerd[1500]: time="2025-02-13T15:19:06.961121640Z" level=info msg="Start subscribing containerd event" Feb 13 15:19:06.961183 containerd[1500]: time="2025-02-13T15:19:06.961177680Z" level=info msg="Start recovering state" Feb 13 15:19:06.961279 containerd[1500]: time="2025-02-13T15:19:06.961266720Z" level=info msg="Start event monitor" Feb 13 15:19:06.961303 containerd[1500]: time="2025-02-13T15:19:06.961282680Z" level=info msg="Start 
snapshots syncer" Feb 13 15:19:06.961303 containerd[1500]: time="2025-02-13T15:19:06.961292520Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:19:06.961303 containerd[1500]: time="2025-02-13T15:19:06.961299040Z" level=info msg="Start streaming server" Feb 13 15:19:06.961722 containerd[1500]: time="2025-02-13T15:19:06.961701360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:19:06.961767 containerd[1500]: time="2025-02-13T15:19:06.961752560Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:19:06.962875 containerd[1500]: time="2025-02-13T15:19:06.961807680Z" level=info msg="containerd successfully booted in 0.051583s" Feb 13 15:19:06.961905 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:19:07.193945 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:19:07.218064 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:19:07.227828 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:19:07.236326 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:19:07.236831 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:19:07.251723 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:19:07.260607 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:19:07.270851 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:19:07.273296 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:19:07.274290 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:19:07.959462 systemd-networkd[1398]: eth0: Gained IPv6LL Feb 13 15:19:07.960358 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:19:07.963360 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Feb 13 15:19:07.967645 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:19:07.975706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:19:07.979022 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:19:08.006380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:19:08.343501 systemd-networkd[1398]: eth1: Gained IPv6LL Feb 13 15:19:08.344247 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 15:19:08.662379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:19:08.664025 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:19:08.667868 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:19:08.669341 systemd[1]: Startup finished in 788ms (kernel) + 6.000s (initrd) + 5.023s (userspace) = 11.813s. Feb 13 15:19:09.229161 kubelet[1593]: E0213 15:19:09.229051 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:19:09.232757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:19:09.232909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:19:09.233274 systemd[1]: kubelet.service: Consumed 846ms CPU time, 234.3M memory peak. Feb 13 15:19:19.484068 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:19:19.492752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:19:19.600473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:19:19.602523 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:19:19.656250 kubelet[1612]: E0213 15:19:19.654648 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:19:19.658430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:19:19.658699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:19:19.659598 systemd[1]: kubelet.service: Consumed 143ms CPU time, 97.3M memory peak. Feb 13 15:19:29.909329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:19:29.922613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:19:30.016852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:19:30.021974 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:19:30.070392 kubelet[1627]: E0213 15:19:30.070344 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:19:30.074357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:19:30.074857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:19:30.075261 systemd[1]: kubelet.service: Consumed 139ms CPU time, 93.3M memory peak. 
Feb 13 15:19:38.615934 systemd-timesyncd[1414]: Contacted time server 85.215.93.134:123 (2.flatcar.pool.ntp.org). Feb 13 15:19:38.616024 systemd-timesyncd[1414]: Initial clock synchronization to Thu 2025-02-13 15:19:39.013112 UTC. Feb 13 15:19:40.329006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:19:40.335623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:19:40.455616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:19:40.455716 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:19:40.510971 kubelet[1642]: E0213 15:19:40.510879 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:19:40.515678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:19:40.516005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:19:40.516776 systemd[1]: kubelet.service: Consumed 141ms CPU time, 95.6M memory peak. Feb 13 15:19:50.552673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:19:50.562615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:19:50.694543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:19:50.697003 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:19:50.739573 kubelet[1658]: E0213 15:19:50.739508 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:19:50.742377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:19:50.742554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:19:50.743566 systemd[1]: kubelet.service: Consumed 152ms CPU time, 96.3M memory peak. Feb 13 15:19:51.903198 update_engine[1480]: I20250213 15:19:51.902366 1480 update_attempter.cc:509] Updating boot flags... Feb 13 15:19:51.957295 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1674) Feb 13 15:19:52.016379 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1675) Feb 13 15:20:00.803173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:20:00.818348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:00.940528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:20:00.961202 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:01.011153 kubelet[1691]: E0213 15:20:01.011082 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:01.013808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:01.013982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:01.014870 systemd[1]: kubelet.service: Consumed 155ms CPU time, 94.3M memory peak. Feb 13 15:20:11.049593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:20:11.059562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:11.187549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:11.188573 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:11.230556 kubelet[1706]: E0213 15:20:11.230505 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:11.233585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:11.233798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:11.234572 systemd[1]: kubelet.service: Consumed 144ms CPU time, 94.4M memory peak. 
Feb 13 15:20:21.299451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:20:21.307581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:21.415050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:21.419600 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:21.469611 kubelet[1721]: E0213 15:20:21.469543 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:21.472311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:21.472505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:21.473487 systemd[1]: kubelet.service: Consumed 144ms CPU time, 95.8M memory peak. Feb 13 15:20:31.549062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 15:20:31.559590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:31.675178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:20:31.679707 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:31.721646 kubelet[1736]: E0213 15:20:31.721549 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:31.725143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:31.725581 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:31.726616 systemd[1]: kubelet.service: Consumed 141ms CPU time, 96.2M memory peak. Feb 13 15:20:41.799729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 15:20:41.807711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:41.934763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:41.935603 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:41.981578 kubelet[1751]: E0213 15:20:41.981507 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:41.985110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:41.985412 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:41.986395 systemd[1]: kubelet.service: Consumed 149ms CPU time, 94.4M memory peak. 
Feb 13 15:20:52.049463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 15:20:52.054473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:52.162373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:52.176785 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:52.220597 kubelet[1766]: E0213 15:20:52.220494 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:52.223473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:52.223759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:52.224216 systemd[1]: kubelet.service: Consumed 141ms CPU time, 94.3M memory peak. Feb 13 15:20:59.932658 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:20:59.938704 systemd[1]: Started sshd@0-5.75.234.95:22-139.178.68.195:58988.service - OpenSSH per-connection server daemon (139.178.68.195:58988). Feb 13 15:21:00.939380 sshd[1773]: Accepted publickey for core from 139.178.68.195 port 58988 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:21:00.941928 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:00.950445 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:21:00.958871 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:21:00.969427 systemd-logind[1478]: New session 1 of user core. 
Feb 13 15:21:00.975319 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:21:00.982875 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:21:00.987889 (systemd)[1777]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:21:00.991614 systemd-logind[1478]: New session c1 of user core. Feb 13 15:21:01.129731 systemd[1777]: Queued start job for default target default.target. Feb 13 15:21:01.141566 systemd[1777]: Created slice app.slice - User Application Slice. Feb 13 15:21:01.141629 systemd[1777]: Reached target paths.target - Paths. Feb 13 15:21:01.141704 systemd[1777]: Reached target timers.target - Timers. Feb 13 15:21:01.144404 systemd[1777]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:21:01.159305 systemd[1777]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:21:01.159545 systemd[1777]: Reached target sockets.target - Sockets. Feb 13 15:21:01.159708 systemd[1777]: Reached target basic.target - Basic System. Feb 13 15:21:01.159836 systemd[1777]: Reached target default.target - Main User Target. Feb 13 15:21:01.159887 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:21:01.160000 systemd[1777]: Startup finished in 159ms. Feb 13 15:21:01.171847 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:21:01.882042 systemd[1]: Started sshd@1-5.75.234.95:22-139.178.68.195:59002.service - OpenSSH per-connection server daemon (139.178.68.195:59002). Feb 13 15:21:02.299405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 15:21:02.307572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:02.456553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:21:02.471553 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:21:02.517065 kubelet[1798]: E0213 15:21:02.516958 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:21:02.520407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:21:02.520566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:21:02.521189 systemd[1]: kubelet.service: Consumed 156ms CPU time, 96.3M memory peak. Feb 13 15:21:02.868973 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 59002 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:21:02.870964 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:02.880219 systemd-logind[1478]: New session 2 of user core. Feb 13 15:21:02.890566 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:21:03.550472 sshd[1805]: Connection closed by 139.178.68.195 port 59002 Feb 13 15:21:03.551406 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:03.557675 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:21:03.558442 systemd[1]: sshd@1-5.75.234.95:22-139.178.68.195:59002.service: Deactivated successfully. Feb 13 15:21:03.560474 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:21:03.561846 systemd-logind[1478]: Removed session 2. Feb 13 15:21:03.725564 systemd[1]: Started sshd@2-5.75.234.95:22-139.178.68.195:59014.service - OpenSSH per-connection server daemon (139.178.68.195:59014). 
Feb 13 15:21:04.704674 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 59014 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:21:04.707359 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:04.714412 systemd-logind[1478]: New session 3 of user core. Feb 13 15:21:04.724666 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:21:05.376984 sshd[1813]: Connection closed by 139.178.68.195 port 59014 Feb 13 15:21:05.377965 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:05.382932 systemd[1]: sshd@2-5.75.234.95:22-139.178.68.195:59014.service: Deactivated successfully. Feb 13 15:21:05.385631 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:21:05.388006 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:21:05.389060 systemd-logind[1478]: Removed session 3. Feb 13 15:21:05.558660 systemd[1]: Started sshd@3-5.75.234.95:22-139.178.68.195:59018.service - OpenSSH per-connection server daemon (139.178.68.195:59018). Feb 13 15:21:06.554538 sshd[1819]: Accepted publickey for core from 139.178.68.195 port 59018 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:21:06.556652 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:06.562951 systemd-logind[1478]: New session 4 of user core. Feb 13 15:21:06.571827 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:21:07.244087 sshd[1821]: Connection closed by 139.178.68.195 port 59018 Feb 13 15:21:07.244958 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:07.250267 systemd[1]: sshd@3-5.75.234.95:22-139.178.68.195:59018.service: Deactivated successfully. Feb 13 15:21:07.254172 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:21:07.255184 systemd-logind[1478]: Session 4 logged out. 
Waiting for processes to exit. Feb 13 15:21:07.256332 systemd-logind[1478]: Removed session 4. Feb 13 15:21:07.424980 systemd[1]: Started sshd@4-5.75.234.95:22-139.178.68.195:42944.service - OpenSSH per-connection server daemon (139.178.68.195:42944). Feb 13 15:21:08.409557 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 42944 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:21:08.412057 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:08.418154 systemd-logind[1478]: New session 5 of user core. Feb 13 15:21:08.432613 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:21:08.946340 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:21:08.946658 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:08.967667 sudo[1830]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:09.130843 sshd[1829]: Connection closed by 139.178.68.195 port 42944 Feb 13 15:21:09.131829 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:09.138325 systemd[1]: sshd@4-5.75.234.95:22-139.178.68.195:42944.service: Deactivated successfully. Feb 13 15:21:09.141391 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:21:09.146742 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:21:09.148644 systemd-logind[1478]: Removed session 5. Feb 13 15:21:09.304058 systemd[1]: Started sshd@5-5.75.234.95:22-139.178.68.195:42946.service - OpenSSH per-connection server daemon (139.178.68.195:42946). 
Feb 13 15:21:10.279552 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 42946 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:21:10.281527 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:10.289356 systemd-logind[1478]: New session 6 of user core. Feb 13 15:21:10.295512 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:21:10.797981 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:21:10.798729 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:10.802901 sudo[1840]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:10.809149 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:21:10.809594 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:10.830900 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:21:10.862906 augenrules[1862]: No rules Feb 13 15:21:10.864276 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:21:10.864577 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:21:10.866176 sudo[1839]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:11.024187 sshd[1838]: Connection closed by 139.178.68.195 port 42946 Feb 13 15:21:11.025159 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:11.031038 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:21:11.031286 systemd[1]: sshd@5-5.75.234.95:22-139.178.68.195:42946.service: Deactivated successfully. Feb 13 15:21:11.033486 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:21:11.035583 systemd-logind[1478]: Removed session 6. 
Feb 13 15:21:11.221651 systemd[1]: Started sshd@6-5.75.234.95:22-139.178.68.195:42950.service - OpenSSH per-connection server daemon (139.178.68.195:42950). Feb 13 15:21:12.218667 sshd[1871]: Accepted publickey for core from 139.178.68.195 port 42950 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:21:12.220462 sshd-session[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:12.227423 systemd-logind[1478]: New session 7 of user core. Feb 13 15:21:12.237650 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:21:12.549538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 15:21:12.564696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:12.686397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:12.691220 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:21:12.743560 kubelet[1882]: E0213 15:21:12.743468 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:21:12.746076 sudo[1888]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:21:12.746096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:21:12.746253 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:21:12.746981 sudo[1888]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:12.747620 systemd[1]: kubelet.service: Consumed 152ms CPU time, 96.2M memory peak. 
Feb 13 15:21:13.340989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:13.341204 systemd[1]: kubelet.service: Consumed 152ms CPU time, 96.2M memory peak. Feb 13 15:21:13.353831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:13.388517 systemd[1]: Reload requested from client PID 1921 ('systemctl') (unit session-7.scope)... Feb 13 15:21:13.388707 systemd[1]: Reloading... Feb 13 15:21:13.520268 zram_generator::config[1966]: No configuration found. Feb 13 15:21:13.634118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:21:13.726701 systemd[1]: Reloading finished in 337 ms. Feb 13 15:21:13.775867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:13.780839 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:21:13.787875 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:13.788783 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:21:13.789173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:13.789373 systemd[1]: kubelet.service: Consumed 98ms CPU time, 84.1M memory peak. Feb 13 15:21:13.797691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:13.915048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:13.926946 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:21:13.965891 kubelet[2020]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:21:13.965891 kubelet[2020]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:21:13.965891 kubelet[2020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:21:13.966373 kubelet[2020]: I0213 15:21:13.966005 2020 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:21:14.496082 kubelet[2020]: I0213 15:21:14.496003 2020 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:21:14.496082 kubelet[2020]: I0213 15:21:14.496046 2020 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:21:14.496420 kubelet[2020]: I0213 15:21:14.496392 2020 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:21:14.527741 kubelet[2020]: I0213 15:21:14.527004 2020 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:21:14.537265 kubelet[2020]: E0213 15:21:14.537202 2020 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:21:14.537265 kubelet[2020]: I0213 15:21:14.537255 2020 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:21:14.542101 kubelet[2020]: I0213 15:21:14.542073 2020 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:21:14.543314 kubelet[2020]: I0213 15:21:14.543286 2020 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:21:14.543483 kubelet[2020]: I0213 15:21:14.543447 2020 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:21:14.543687 kubelet[2020]: I0213 15:21:14.543485 2020 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOp
tions":null,"CgroupVersion":2} Feb 13 15:21:14.543827 kubelet[2020]: I0213 15:21:14.543815 2020 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:21:14.543862 kubelet[2020]: I0213 15:21:14.543828 2020 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:21:14.544038 kubelet[2020]: I0213 15:21:14.544015 2020 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:21:14.546619 kubelet[2020]: I0213 15:21:14.545958 2020 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:21:14.546619 kubelet[2020]: I0213 15:21:14.545993 2020 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:21:14.546619 kubelet[2020]: I0213 15:21:14.546026 2020 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:21:14.546619 kubelet[2020]: I0213 15:21:14.546040 2020 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:21:14.548262 kubelet[2020]: E0213 15:21:14.547027 2020 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:14.548262 kubelet[2020]: E0213 15:21:14.547087 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:14.548724 kubelet[2020]: I0213 15:21:14.548623 2020 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:21:14.550827 kubelet[2020]: I0213 15:21:14.550806 2020 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:21:14.551704 kubelet[2020]: W0213 15:21:14.551667 2020 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:21:14.552421 kubelet[2020]: I0213 15:21:14.552406 2020 server.go:1269] "Started kubelet" Feb 13 15:21:14.553552 kubelet[2020]: I0213 15:21:14.553320 2020 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:21:14.555256 kubelet[2020]: I0213 15:21:14.554669 2020 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:21:14.556506 kubelet[2020]: I0213 15:21:14.556451 2020 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:21:14.556903 kubelet[2020]: I0213 15:21:14.556883 2020 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:21:14.557676 kubelet[2020]: I0213 15:21:14.557645 2020 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:21:14.561883 kubelet[2020]: E0213 15:21:14.560448 2020 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1823cdb8c4ab550f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-02-13 15:21:14.552382735 +0000 UTC m=+0.622164930,LastTimestamp:2025-02-13 15:21:14.552382735 +0000 UTC m=+0.622164930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Feb 13 15:21:14.569384 kubelet[2020]: I0213 15:21:14.569350 2020 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:21:14.573073 kubelet[2020]: E0213 15:21:14.573039 2020 kubelet.go:1478] 
"Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:21:14.574144 kubelet[2020]: I0213 15:21:14.573547 2020 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:21:14.574938 kubelet[2020]: E0213 15:21:14.574911 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:14.578580 kubelet[2020]: I0213 15:21:14.573589 2020 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:21:14.579538 kubelet[2020]: W0213 15:21:14.579411 2020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:21:14.580060 kubelet[2020]: E0213 15:21:14.579906 2020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 15:21:14.580177 kubelet[2020]: I0213 15:21:14.575403 2020 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:21:14.582005 kubelet[2020]: W0213 15:21:14.581787 2020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:21:14.582005 kubelet[2020]: E0213 15:21:14.581825 2020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 15:21:14.582784 kubelet[2020]: I0213 15:21:14.582756 
2020 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:21:14.582998 kubelet[2020]: I0213 15:21:14.582975 2020 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:21:14.589155 kubelet[2020]: I0213 15:21:14.588976 2020 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:21:14.599263 kubelet[2020]: E0213 15:21:14.598417 2020 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 15:21:14.599263 kubelet[2020]: E0213 15:21:14.598481 2020 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1823cdb8c5e630ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-02-13 15:21:14.573017324 +0000 UTC m=+0.642799519,LastTimestamp:2025-02-13 15:21:14.573017324 +0000 UTC m=+0.642799519,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Feb 13 15:21:14.599263 kubelet[2020]: W0213 15:21:14.598737 2020 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group 
"storage.k8s.io" at the cluster scope Feb 13 15:21:14.599263 kubelet[2020]: E0213 15:21:14.598769 2020 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 15:21:14.618403 kubelet[2020]: I0213 15:21:14.618368 2020 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:21:14.618552 kubelet[2020]: I0213 15:21:14.618541 2020 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:21:14.618658 kubelet[2020]: I0213 15:21:14.618649 2020 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:21:14.624249 kubelet[2020]: I0213 15:21:14.623535 2020 policy_none.go:49] "None policy: Start" Feb 13 15:21:14.626249 kubelet[2020]: I0213 15:21:14.626203 2020 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:21:14.626469 kubelet[2020]: I0213 15:21:14.626458 2020 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:21:14.629412 kubelet[2020]: I0213 15:21:14.629356 2020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:21:14.630718 kubelet[2020]: I0213 15:21:14.630690 2020 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:21:14.630718 kubelet[2020]: I0213 15:21:14.630718 2020 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:21:14.630859 kubelet[2020]: I0213 15:21:14.630741 2020 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:21:14.630859 kubelet[2020]: E0213 15:21:14.630850 2020 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:21:14.640796 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:21:14.651525 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:21:14.656459 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:21:14.670283 kubelet[2020]: I0213 15:21:14.668990 2020 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:21:14.670283 kubelet[2020]: I0213 15:21:14.669336 2020 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:21:14.670283 kubelet[2020]: I0213 15:21:14.669375 2020 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:21:14.670283 kubelet[2020]: I0213 15:21:14.669811 2020 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:21:14.672398 kubelet[2020]: E0213 15:21:14.672145 2020 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Feb 13 15:21:14.771363 kubelet[2020]: I0213 15:21:14.771160 2020 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.4" Feb 13 15:21:14.780936 kubelet[2020]: I0213 15:21:14.780864 2020 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.4" Feb 13 15:21:14.780936 kubelet[2020]: E0213 15:21:14.780918 
2020 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found" Feb 13 15:21:14.807154 kubelet[2020]: E0213 15:21:14.807106 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:14.908196 kubelet[2020]: E0213 15:21:14.908081 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:14.959174 sudo[1888]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:15.008802 kubelet[2020]: E0213 15:21:15.008687 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:15.109483 kubelet[2020]: E0213 15:21:15.109386 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:15.120110 sshd[1873]: Connection closed by 139.178.68.195 port 42950 Feb 13 15:21:15.121142 sshd-session[1871]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:15.126593 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:21:15.126733 systemd[1]: sshd@6-5.75.234.95:22-139.178.68.195:42950.service: Deactivated successfully. Feb 13 15:21:15.130523 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:21:15.130932 systemd[1]: session-7.scope: Consumed 475ms CPU time, 72.6M memory peak. Feb 13 15:21:15.135034 systemd-logind[1478]: Removed session 7. 
Feb 13 15:21:15.210448 kubelet[2020]: E0213 15:21:15.210383 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:15.311740 kubelet[2020]: E0213 15:21:15.311611 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:15.412842 kubelet[2020]: E0213 15:21:15.412538 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:15.498589 kubelet[2020]: I0213 15:21:15.498441 2020 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 15:21:15.498815 kubelet[2020]: W0213 15:21:15.498753 2020 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:21:15.512853 kubelet[2020]: E0213 15:21:15.512755 2020 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 15:21:15.547484 kubelet[2020]: I0213 15:21:15.547315 2020 apiserver.go:52] "Watching apiserver" Feb 13 15:21:15.547484 kubelet[2020]: E0213 15:21:15.547398 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:15.553265 kubelet[2020]: E0213 15:21:15.552803 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9" Feb 13 15:21:15.563287 systemd[1]: Created slice kubepods-besteffort-pode00e6b4d_881c_4be3_a4b5_7157b0a62670.slice - libcontainer container 
kubepods-besteffort-pode00e6b4d_881c_4be3_a4b5_7157b0a62670.slice. Feb 13 15:21:15.579897 kubelet[2020]: I0213 15:21:15.579862 2020 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:21:15.583157 systemd[1]: Created slice kubepods-besteffort-pod13eb5662_ec00_4a36_af11_cc5770848f2a.slice - libcontainer container kubepods-besteffort-pod13eb5662_ec00_4a36_af11_cc5770848f2a.slice. Feb 13 15:21:15.586142 kubelet[2020]: I0213 15:21:15.585595 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e00e6b4d-881c-4be3-a4b5-7157b0a62670-tigera-ca-bundle\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586142 kubelet[2020]: I0213 15:21:15.585629 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e00e6b4d-881c-4be3-a4b5-7157b0a62670-node-certs\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586142 kubelet[2020]: I0213 15:21:15.585648 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4d9d4af-c074-438d-84cb-6509cb3860d9-socket-dir\") pod \"csi-node-driver-2t7hw\" (UID: \"b4d9d4af-c074-438d-84cb-6509cb3860d9\") " pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:15.586142 kubelet[2020]: I0213 15:21:15.585667 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77cck\" (UniqueName: \"kubernetes.io/projected/b4d9d4af-c074-438d-84cb-6509cb3860d9-kube-api-access-77cck\") pod \"csi-node-driver-2t7hw\" (UID: \"b4d9d4af-c074-438d-84cb-6509cb3860d9\") " pod="calico-system/csi-node-driver-2t7hw" Feb 
13 15:21:15.586142 kubelet[2020]: I0213 15:21:15.585710 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13eb5662-ec00-4a36-af11-cc5770848f2a-xtables-lock\") pod \"kube-proxy-pn8cf\" (UID: \"13eb5662-ec00-4a36-af11-cc5770848f2a\") " pod="kube-system/kube-proxy-pn8cf" Feb 13 15:21:15.586367 kubelet[2020]: I0213 15:21:15.585730 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8csqz\" (UniqueName: \"kubernetes.io/projected/13eb5662-ec00-4a36-af11-cc5770848f2a-kube-api-access-8csqz\") pod \"kube-proxy-pn8cf\" (UID: \"13eb5662-ec00-4a36-af11-cc5770848f2a\") " pod="kube-system/kube-proxy-pn8cf" Feb 13 15:21:15.586367 kubelet[2020]: I0213 15:21:15.585744 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-lib-modules\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586367 kubelet[2020]: I0213 15:21:15.585761 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-policysync\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586367 kubelet[2020]: I0213 15:21:15.585777 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-cni-bin-dir\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586367 kubelet[2020]: I0213 15:21:15.585792 2020 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-flexvol-driver-host\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586472 kubelet[2020]: I0213 15:21:15.585807 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4d9d4af-c074-438d-84cb-6509cb3860d9-kubelet-dir\") pod \"csi-node-driver-2t7hw\" (UID: \"b4d9d4af-c074-438d-84cb-6509cb3860d9\") " pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:15.586472 kubelet[2020]: I0213 15:21:15.585823 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-cni-log-dir\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586472 kubelet[2020]: I0213 15:21:15.585839 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13eb5662-ec00-4a36-af11-cc5770848f2a-kube-proxy\") pod \"kube-proxy-pn8cf\" (UID: \"13eb5662-ec00-4a36-af11-cc5770848f2a\") " pod="kube-system/kube-proxy-pn8cf" Feb 13 15:21:15.586472 kubelet[2020]: I0213 15:21:15.585855 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-cni-net-dir\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586472 kubelet[2020]: I0213 15:21:15.585871 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-h8nqc\" (UniqueName: \"kubernetes.io/projected/e00e6b4d-881c-4be3-a4b5-7157b0a62670-kube-api-access-h8nqc\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586576 kubelet[2020]: I0213 15:21:15.585885 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b4d9d4af-c074-438d-84cb-6509cb3860d9-varrun\") pod \"csi-node-driver-2t7hw\" (UID: \"b4d9d4af-c074-438d-84cb-6509cb3860d9\") " pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:15.586576 kubelet[2020]: I0213 15:21:15.585901 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4d9d4af-c074-438d-84cb-6509cb3860d9-registration-dir\") pod \"csi-node-driver-2t7hw\" (UID: \"b4d9d4af-c074-438d-84cb-6509cb3860d9\") " pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:15.586576 kubelet[2020]: I0213 15:21:15.585918 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13eb5662-ec00-4a36-af11-cc5770848f2a-lib-modules\") pod \"kube-proxy-pn8cf\" (UID: \"13eb5662-ec00-4a36-af11-cc5770848f2a\") " pod="kube-system/kube-proxy-pn8cf" Feb 13 15:21:15.586576 kubelet[2020]: I0213 15:21:15.585932 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-xtables-lock\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586576 kubelet[2020]: I0213 15:21:15.585948 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-var-run-calico\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.586687 kubelet[2020]: I0213 15:21:15.585974 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e00e6b4d-881c-4be3-a4b5-7157b0a62670-var-lib-calico\") pod \"calico-node-59d4w\" (UID: \"e00e6b4d-881c-4be3-a4b5-7157b0a62670\") " pod="calico-system/calico-node-59d4w" Feb 13 15:21:15.614493 kubelet[2020]: I0213 15:21:15.614455 2020 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 15:21:15.616254 containerd[1500]: time="2025-02-13T15:21:15.615510885Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:21:15.616666 kubelet[2020]: I0213 15:21:15.615995 2020 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 15:21:15.690841 kubelet[2020]: E0213 15:21:15.690423 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.690841 kubelet[2020]: W0213 15:21:15.690454 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.690841 kubelet[2020]: E0213 15:21:15.690483 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:15.690841 kubelet[2020]: E0213 15:21:15.690806 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.690841 kubelet[2020]: W0213 15:21:15.690840 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.691183 kubelet[2020]: E0213 15:21:15.690859 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:15.692643 kubelet[2020]: E0213 15:21:15.692203 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.692643 kubelet[2020]: W0213 15:21:15.692239 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.692643 kubelet[2020]: E0213 15:21:15.692275 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:15.692643 kubelet[2020]: E0213 15:21:15.692649 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.692643 kubelet[2020]: W0213 15:21:15.692659 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.693271 kubelet[2020]: E0213 15:21:15.692671 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:15.693271 kubelet[2020]: E0213 15:21:15.692864 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.693271 kubelet[2020]: W0213 15:21:15.692874 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.693271 kubelet[2020]: E0213 15:21:15.692884 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:15.693271 kubelet[2020]: E0213 15:21:15.693021 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.693271 kubelet[2020]: W0213 15:21:15.693029 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.693271 kubelet[2020]: E0213 15:21:15.693037 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:15.693271 kubelet[2020]: E0213 15:21:15.693159 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.693271 kubelet[2020]: W0213 15:21:15.693166 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.693271 kubelet[2020]: E0213 15:21:15.693173 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:15.693801 kubelet[2020]: E0213 15:21:15.693355 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.693801 kubelet[2020]: W0213 15:21:15.693364 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.693801 kubelet[2020]: E0213 15:21:15.693374 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:15.699268 kubelet[2020]: E0213 15:21:15.698464 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.699268 kubelet[2020]: W0213 15:21:15.698487 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.699268 kubelet[2020]: E0213 15:21:15.698506 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:15.703314 kubelet[2020]: E0213 15:21:15.702969 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.704631 kubelet[2020]: W0213 15:21:15.704470 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.704631 kubelet[2020]: E0213 15:21:15.704507 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:15.707124 kubelet[2020]: E0213 15:21:15.706750 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.707124 kubelet[2020]: W0213 15:21:15.706776 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.707124 kubelet[2020]: E0213 15:21:15.706801 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:15.712976 kubelet[2020]: E0213 15:21:15.712933 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:15.712976 kubelet[2020]: W0213 15:21:15.712963 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:15.713141 kubelet[2020]: E0213 15:21:15.712988 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:15.880380 containerd[1500]: time="2025-02-13T15:21:15.880327623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-59d4w,Uid:e00e6b4d-881c-4be3-a4b5-7157b0a62670,Namespace:calico-system,Attempt:0,}" Feb 13 15:21:15.886672 containerd[1500]: time="2025-02-13T15:21:15.886626964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pn8cf,Uid:13eb5662-ec00-4a36-af11-cc5770848f2a,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:16.489908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909586460.mount: Deactivated successfully. 
Feb 13 15:21:16.497751 containerd[1500]: time="2025-02-13T15:21:16.496848485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:16.498994 containerd[1500]: time="2025-02-13T15:21:16.498964920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:21:16.501057 containerd[1500]: time="2025-02-13T15:21:16.501023240Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:16.503980 containerd[1500]: time="2025-02-13T15:21:16.503935158Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:16.505928 containerd[1500]: time="2025-02-13T15:21:16.505893368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:21:16.511339 containerd[1500]: time="2025-02-13T15:21:16.511297204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:16.512725 containerd[1500]: time="2025-02-13T15:21:16.512660792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 632.24188ms" Feb 13 15:21:16.513949 containerd[1500]: 
time="2025-02-13T15:21:16.513897112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 627.183539ms" Feb 13 15:21:16.547672 kubelet[2020]: E0213 15:21:16.547604 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:16.604523 containerd[1500]: time="2025-02-13T15:21:16.604178960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:16.604523 containerd[1500]: time="2025-02-13T15:21:16.604302508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:16.604523 containerd[1500]: time="2025-02-13T15:21:16.604314187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:16.604523 containerd[1500]: time="2025-02-13T15:21:16.604434695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:16.607407 containerd[1500]: time="2025-02-13T15:21:16.606927773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:16.607407 containerd[1500]: time="2025-02-13T15:21:16.606984968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:16.607407 containerd[1500]: time="2025-02-13T15:21:16.606997327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:16.607407 containerd[1500]: time="2025-02-13T15:21:16.607068520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:16.677798 systemd[1]: Started cri-containerd-29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06.scope - libcontainer container 29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06. Feb 13 15:21:16.682116 systemd[1]: Started cri-containerd-6830b87f2a8035760a6f94e3e41c0efbd2756130d7636b0be287593b1a643b18.scope - libcontainer container 6830b87f2a8035760a6f94e3e41c0efbd2756130d7636b0be287593b1a643b18. Feb 13 15:21:16.720824 containerd[1500]: time="2025-02-13T15:21:16.720265906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pn8cf,Uid:13eb5662-ec00-4a36-af11-cc5770848f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6830b87f2a8035760a6f94e3e41c0efbd2756130d7636b0be287593b1a643b18\"" Feb 13 15:21:16.722951 containerd[1500]: time="2025-02-13T15:21:16.722802820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-59d4w,Uid:e00e6b4d-881c-4be3-a4b5-7157b0a62670,Namespace:calico-system,Attempt:0,} returns sandbox id \"29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06\"" Feb 13 15:21:16.725408 containerd[1500]: time="2025-02-13T15:21:16.725172390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:21:17.548823 kubelet[2020]: E0213 15:21:17.548699 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:17.632056 kubelet[2020]: E0213 15:21:17.631701 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9" Feb 13 15:21:17.730768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258827791.mount: Deactivated successfully. Feb 13 15:21:18.070854 containerd[1500]: time="2025-02-13T15:21:18.070766711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:18.072861 containerd[1500]: time="2025-02-13T15:21:18.072556080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769282" Feb 13 15:21:18.074177 containerd[1500]: time="2025-02-13T15:21:18.074098469Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:18.077148 containerd[1500]: time="2025-02-13T15:21:18.077067659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:18.078060 containerd[1500]: time="2025-02-13T15:21:18.077752881Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.352544694s" Feb 13 15:21:18.078060 containerd[1500]: time="2025-02-13T15:21:18.077787558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:21:18.079833 containerd[1500]: time="2025-02-13T15:21:18.079345227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" 
Feb 13 15:21:18.081148 containerd[1500]: time="2025-02-13T15:21:18.081108118Z" level=info msg="CreateContainer within sandbox \"6830b87f2a8035760a6f94e3e41c0efbd2756130d7636b0be287593b1a643b18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:21:18.106368 containerd[1500]: time="2025-02-13T15:21:18.106318910Z" level=info msg="CreateContainer within sandbox \"6830b87f2a8035760a6f94e3e41c0efbd2756130d7636b0be287593b1a643b18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a735bbc5005f8aa07e49e8e7f91d3787bd7ef4295fc18c6c5d140f4a68709f1e\"" Feb 13 15:21:18.107368 containerd[1500]: time="2025-02-13T15:21:18.107339184Z" level=info msg="StartContainer for \"a735bbc5005f8aa07e49e8e7f91d3787bd7ef4295fc18c6c5d140f4a68709f1e\"" Feb 13 15:21:18.135925 systemd[1]: Started cri-containerd-a735bbc5005f8aa07e49e8e7f91d3787bd7ef4295fc18c6c5d140f4a68709f1e.scope - libcontainer container a735bbc5005f8aa07e49e8e7f91d3787bd7ef4295fc18c6c5d140f4a68709f1e. Feb 13 15:21:18.168588 containerd[1500]: time="2025-02-13T15:21:18.168509862Z" level=info msg="StartContainer for \"a735bbc5005f8aa07e49e8e7f91d3787bd7ef4295fc18c6c5d140f4a68709f1e\" returns successfully" Feb 13 15:21:18.549888 kubelet[2020]: E0213 15:21:18.549836 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:18.666751 kubelet[2020]: I0213 15:21:18.666654 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pn8cf" podStartSLOduration=3.311986955 podStartE2EDuration="4.666630904s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="2025-02-13 15:21:16.724511814 +0000 UTC m=+2.794294009" lastFinishedPulling="2025-02-13 15:21:18.079155763 +0000 UTC m=+4.148937958" observedRunningTime="2025-02-13 15:21:18.66655739 +0000 UTC m=+4.736339625" watchObservedRunningTime="2025-02-13 15:21:18.666630904 +0000 UTC m=+4.736413139" Feb 13 
15:21:18.706317 kubelet[2020]: E0213 15:21:18.706270 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.706317 kubelet[2020]: W0213 15:21:18.706304 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.706540 kubelet[2020]: E0213 15:21:18.706397 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.706711 kubelet[2020]: E0213 15:21:18.706680 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.706711 kubelet[2020]: W0213 15:21:18.706697 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.706772 kubelet[2020]: E0213 15:21:18.706716 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.706950 kubelet[2020]: E0213 15:21:18.706911 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.706950 kubelet[2020]: W0213 15:21:18.706921 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.706950 kubelet[2020]: E0213 15:21:18.706932 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.707139 kubelet[2020]: E0213 15:21:18.707127 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.707170 kubelet[2020]: W0213 15:21:18.707139 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.707170 kubelet[2020]: E0213 15:21:18.707153 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.708274 kubelet[2020]: E0213 15:21:18.707489 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.708274 kubelet[2020]: W0213 15:21:18.707506 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.708274 kubelet[2020]: E0213 15:21:18.707520 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.708274 kubelet[2020]: E0213 15:21:18.707742 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.708274 kubelet[2020]: W0213 15:21:18.707750 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.708274 kubelet[2020]: E0213 15:21:18.707759 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.708274 kubelet[2020]: E0213 15:21:18.707888 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.708274 kubelet[2020]: W0213 15:21:18.707896 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.708274 kubelet[2020]: E0213 15:21:18.707904 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.708274 kubelet[2020]: E0213 15:21:18.708020 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709016 kubelet[2020]: W0213 15:21:18.708027 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709016 kubelet[2020]: E0213 15:21:18.708035 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.709016 kubelet[2020]: E0213 15:21:18.708172 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709016 kubelet[2020]: W0213 15:21:18.708178 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709016 kubelet[2020]: E0213 15:21:18.708187 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.709016 kubelet[2020]: E0213 15:21:18.708352 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709016 kubelet[2020]: W0213 15:21:18.708362 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709016 kubelet[2020]: E0213 15:21:18.708371 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.709016 kubelet[2020]: E0213 15:21:18.708549 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709016 kubelet[2020]: W0213 15:21:18.708561 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709666 kubelet[2020]: E0213 15:21:18.708571 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.709666 kubelet[2020]: E0213 15:21:18.708705 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709666 kubelet[2020]: W0213 15:21:18.708712 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709666 kubelet[2020]: E0213 15:21:18.708720 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.709666 kubelet[2020]: E0213 15:21:18.709009 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709666 kubelet[2020]: W0213 15:21:18.709026 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709666 kubelet[2020]: E0213 15:21:18.709040 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.709666 kubelet[2020]: E0213 15:21:18.709374 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709666 kubelet[2020]: W0213 15:21:18.709389 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709666 kubelet[2020]: E0213 15:21:18.709406 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.709964 kubelet[2020]: E0213 15:21:18.709811 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.709964 kubelet[2020]: W0213 15:21:18.709828 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.709964 kubelet[2020]: E0213 15:21:18.709846 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.710221 kubelet[2020]: E0213 15:21:18.710187 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.710221 kubelet[2020]: W0213 15:21:18.710206 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.710221 kubelet[2020]: E0213 15:21:18.710275 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.710802 kubelet[2020]: E0213 15:21:18.710739 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.710802 kubelet[2020]: W0213 15:21:18.710785 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.711100 kubelet[2020]: E0213 15:21:18.710809 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.711191 kubelet[2020]: E0213 15:21:18.711165 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.711331 kubelet[2020]: W0213 15:21:18.711192 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.711653 kubelet[2020]: E0213 15:21:18.711432 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.711781 kubelet[2020]: E0213 15:21:18.711764 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.711823 kubelet[2020]: W0213 15:21:18.711783 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.711823 kubelet[2020]: E0213 15:21:18.711798 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.712279 kubelet[2020]: E0213 15:21:18.712196 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.712279 kubelet[2020]: W0213 15:21:18.712255 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.712279 kubelet[2020]: E0213 15:21:18.712274 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.712997 kubelet[2020]: E0213 15:21:18.712936 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.712997 kubelet[2020]: W0213 15:21:18.712967 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.713211 kubelet[2020]: E0213 15:21:18.713002 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.713507 kubelet[2020]: E0213 15:21:18.713488 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.713592 kubelet[2020]: W0213 15:21:18.713510 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.713592 kubelet[2020]: E0213 15:21:18.713572 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.714180 kubelet[2020]: E0213 15:21:18.714166 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.714180 kubelet[2020]: W0213 15:21:18.714180 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.714348 kubelet[2020]: E0213 15:21:18.714198 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.714407 kubelet[2020]: E0213 15:21:18.714396 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.714506 kubelet[2020]: W0213 15:21:18.714409 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.714506 kubelet[2020]: E0213 15:21:18.714440 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.714652 kubelet[2020]: E0213 15:21:18.714641 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.714699 kubelet[2020]: W0213 15:21:18.714653 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.714699 kubelet[2020]: E0213 15:21:18.714670 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.714831 kubelet[2020]: E0213 15:21:18.714820 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.714876 kubelet[2020]: W0213 15:21:18.714832 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.714876 kubelet[2020]: E0213 15:21:18.714847 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.714987 kubelet[2020]: E0213 15:21:18.714979 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.715033 kubelet[2020]: W0213 15:21:18.714987 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.715033 kubelet[2020]: E0213 15:21:18.715000 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.715162 kubelet[2020]: E0213 15:21:18.715154 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.715198 kubelet[2020]: W0213 15:21:18.715163 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.715198 kubelet[2020]: E0213 15:21:18.715178 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.715523 kubelet[2020]: E0213 15:21:18.715505 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.715651 kubelet[2020]: W0213 15:21:18.715596 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.715651 kubelet[2020]: E0213 15:21:18.715628 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.716078 kubelet[2020]: E0213 15:21:18.715944 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.716078 kubelet[2020]: W0213 15:21:18.715963 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.716078 kubelet[2020]: E0213 15:21:18.715985 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:18.716506 kubelet[2020]: E0213 15:21:18.716250 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.716506 kubelet[2020]: W0213 15:21:18.716262 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.716506 kubelet[2020]: E0213 15:21:18.716274 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:18.716831 kubelet[2020]: E0213 15:21:18.716813 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:18.716912 kubelet[2020]: W0213 15:21:18.716897 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:18.716990 kubelet[2020]: E0213 15:21:18.716957 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.550701 kubelet[2020]: E0213 15:21:19.550652 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:19.631995 kubelet[2020]: E0213 15:21:19.631464 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9" Feb 13 15:21:19.720937 kubelet[2020]: E0213 15:21:19.720668 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.720937 kubelet[2020]: W0213 15:21:19.720694 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.720937 kubelet[2020]: E0213 15:21:19.720719 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.720937 kubelet[2020]: E0213 15:21:19.720900 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.720937 kubelet[2020]: W0213 15:21:19.720911 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.720937 kubelet[2020]: E0213 15:21:19.720922 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.721291 kubelet[2020]: E0213 15:21:19.721088 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.721291 kubelet[2020]: W0213 15:21:19.721097 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.721291 kubelet[2020]: E0213 15:21:19.721108 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.722181 kubelet[2020]: E0213 15:21:19.721296 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.722181 kubelet[2020]: W0213 15:21:19.721306 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.722181 kubelet[2020]: E0213 15:21:19.721318 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.722181 kubelet[2020]: E0213 15:21:19.721921 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.722181 kubelet[2020]: W0213 15:21:19.721941 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.722181 kubelet[2020]: E0213 15:21:19.721962 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.722606 kubelet[2020]: E0213 15:21:19.722500 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.722606 kubelet[2020]: W0213 15:21:19.722534 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.722606 kubelet[2020]: E0213 15:21:19.722549 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.723067 kubelet[2020]: E0213 15:21:19.722932 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.723067 kubelet[2020]: W0213 15:21:19.722950 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.723067 kubelet[2020]: E0213 15:21:19.722966 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.723379 kubelet[2020]: E0213 15:21:19.723305 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.723379 kubelet[2020]: W0213 15:21:19.723321 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.723379 kubelet[2020]: E0213 15:21:19.723339 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.723986 kubelet[2020]: E0213 15:21:19.723850 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.723986 kubelet[2020]: W0213 15:21:19.723869 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.723986 kubelet[2020]: E0213 15:21:19.723887 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.724218 kubelet[2020]: E0213 15:21:19.724097 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.724218 kubelet[2020]: W0213 15:21:19.724108 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.724218 kubelet[2020]: E0213 15:21:19.724120 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.724687 kubelet[2020]: E0213 15:21:19.724562 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.724687 kubelet[2020]: W0213 15:21:19.724588 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.724687 kubelet[2020]: E0213 15:21:19.724601 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.724976 kubelet[2020]: E0213 15:21:19.724797 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.724976 kubelet[2020]: W0213 15:21:19.724806 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.724976 kubelet[2020]: E0213 15:21:19.724816 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.725105 kubelet[2020]: E0213 15:21:19.725093 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.725178 kubelet[2020]: W0213 15:21:19.725166 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.725331 kubelet[2020]: E0213 15:21:19.725220 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.725697 kubelet[2020]: E0213 15:21:19.725595 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.725697 kubelet[2020]: W0213 15:21:19.725619 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.725697 kubelet[2020]: E0213 15:21:19.725631 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.726024 kubelet[2020]: E0213 15:21:19.725923 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.726024 kubelet[2020]: W0213 15:21:19.725935 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.726024 kubelet[2020]: E0213 15:21:19.725946 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:19.726203 kubelet[2020]: E0213 15:21:19.726159 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.726203 kubelet[2020]: W0213 15:21:19.726171 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.726203 kubelet[2020]: E0213 15:21:19.726180 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:19.726717 kubelet[2020]: E0213 15:21:19.726616 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:19.726717 kubelet[2020]: W0213 15:21:19.726630 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:19.726717 kubelet[2020]: E0213 15:21:19.726644 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 15:21:19.726993 kubelet[2020]: E0213 15:21:19.726823 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.726993 kubelet[2020]: W0213 15:21:19.726832 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.726993 kubelet[2020]: E0213 15:21:19.726841 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.727112 kubelet[2020]: E0213 15:21:19.727100 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.727415 kubelet[2020]: W0213 15:21:19.727168 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.727415 kubelet[2020]: E0213 15:21:19.727181 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.727712 kubelet[2020]: E0213 15:21:19.727632 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.727712 kubelet[2020]: W0213 15:21:19.727651 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.727712 kubelet[2020]: E0213 15:21:19.727667 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.824216 kubelet[2020]: E0213 15:21:19.824007 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.825768 kubelet[2020]: W0213 15:21:19.825543 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.825768 kubelet[2020]: E0213 15:21:19.825600 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.826549 kubelet[2020]: E0213 15:21:19.826069 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.826549 kubelet[2020]: W0213 15:21:19.826089 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.826549 kubelet[2020]: E0213 15:21:19.826109 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.826766 kubelet[2020]: E0213 15:21:19.826750 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.826933 kubelet[2020]: W0213 15:21:19.826807 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.826933 kubelet[2020]: E0213 15:21:19.826832 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.827080 kubelet[2020]: E0213 15:21:19.827062 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.827246 kubelet[2020]: W0213 15:21:19.827217 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.827455 kubelet[2020]: E0213 15:21:19.827368 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.827624 kubelet[2020]: E0213 15:21:19.827612 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.827772 kubelet[2020]: W0213 15:21:19.827690 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.827772 kubelet[2020]: E0213 15:21:19.827722 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.828124 kubelet[2020]: E0213 15:21:19.828031 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.828124 kubelet[2020]: W0213 15:21:19.828046 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.828124 kubelet[2020]: E0213 15:21:19.828069 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.828594 kubelet[2020]: E0213 15:21:19.828437 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.828594 kubelet[2020]: W0213 15:21:19.828459 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.828594 kubelet[2020]: E0213 15:21:19.828481 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.828772 kubelet[2020]: E0213 15:21:19.828761 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.828863 kubelet[2020]: W0213 15:21:19.828814 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.828894 kubelet[2020]: E0213 15:21:19.828868 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.829180 kubelet[2020]: E0213 15:21:19.829073 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.829180 kubelet[2020]: W0213 15:21:19.829085 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.829180 kubelet[2020]: E0213 15:21:19.829096 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.829623 kubelet[2020]: E0213 15:21:19.829439 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.829623 kubelet[2020]: W0213 15:21:19.829456 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.829623 kubelet[2020]: E0213 15:21:19.829471 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.829862 kubelet[2020]: E0213 15:21:19.829821 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.829895 kubelet[2020]: W0213 15:21:19.829867 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.829924 kubelet[2020]: E0213 15:21:19.829898 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.830209 kubelet[2020]: E0213 15:21:19.830192 2020 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:19.830280 kubelet[2020]: W0213 15:21:19.830212 2020 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:19.830311 kubelet[2020]: E0213 15:21:19.830280 2020 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:19.987219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411973781.mount: Deactivated successfully.
Feb 13 15:21:20.058662 containerd[1500]: time="2025-02-13T15:21:20.058600810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:20.059346 containerd[1500]: time="2025-02-13T15:21:20.059295519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Feb 13 15:21:20.060847 containerd[1500]: time="2025-02-13T15:21:20.060756573Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:20.066744 containerd[1500]: time="2025-02-13T15:21:20.066682023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:20.067948 containerd[1500]: time="2025-02-13T15:21:20.067487764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.988084302s"
Feb 13 15:21:20.067948 containerd[1500]: time="2025-02-13T15:21:20.067531521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 15:21:20.071500 containerd[1500]: time="2025-02-13T15:21:20.071450037Z" level=info msg="CreateContainer within sandbox \"29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:21:20.093204 containerd[1500]: time="2025-02-13T15:21:20.093014911Z" level=info msg="CreateContainer within sandbox \"29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887\""
Feb 13 15:21:20.096248 containerd[1500]: time="2025-02-13T15:21:20.094652192Z" level=info msg="StartContainer for \"1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887\""
Feb 13 15:21:20.131685 systemd[1]: Started cri-containerd-1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887.scope - libcontainer container 1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887.
Feb 13 15:21:20.170341 containerd[1500]: time="2025-02-13T15:21:20.170287580Z" level=info msg="StartContainer for \"1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887\" returns successfully"
Feb 13 15:21:20.183163 systemd[1]: cri-containerd-1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887.scope: Deactivated successfully.
Feb 13 15:21:20.309019 containerd[1500]: time="2025-02-13T15:21:20.308880036Z" level=info msg="shim disconnected" id=1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887 namespace=k8s.io
Feb 13 15:21:20.309019 containerd[1500]: time="2025-02-13T15:21:20.308992268Z" level=warning msg="cleaning up after shim disconnected" id=1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887 namespace=k8s.io
Feb 13 15:21:20.309019 containerd[1500]: time="2025-02-13T15:21:20.309009107Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:21:20.551714 kubelet[2020]: E0213 15:21:20.551619 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:20.656565 containerd[1500]: time="2025-02-13T15:21:20.656517193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:21:20.958190 systemd[1]: run-containerd-runc-k8s.io-1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887-runc.aLmcQ7.mount: Deactivated successfully.
Feb 13 15:21:20.958449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1311fd5859926a9b504de60821acaacf0eb92e65fbd7c09886a6d03df0be8887-rootfs.mount: Deactivated successfully.
Feb 13 15:21:21.552291 kubelet[2020]: E0213 15:21:21.552207 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:21.632020 kubelet[2020]: E0213 15:21:21.631381 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9"
Feb 13 15:21:22.552573 kubelet[2020]: E0213 15:21:22.552503 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:23.553032 kubelet[2020]: E0213 15:21:23.552967 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:23.632148 kubelet[2020]: E0213 15:21:23.632053 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9"
Feb 13 15:21:24.553310 kubelet[2020]: E0213 15:21:24.553267 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:25.278458 containerd[1500]: time="2025-02-13T15:21:25.278398411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:25.280567 containerd[1500]: time="2025-02-13T15:21:25.279815025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 15:21:25.280567 containerd[1500]: time="2025-02-13T15:21:25.279989057Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:25.283660 containerd[1500]: time="2025-02-13T15:21:25.283615770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:25.284986 containerd[1500]: time="2025-02-13T15:21:25.284604164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.628042054s"
Feb 13 15:21:25.285153 containerd[1500]: time="2025-02-13T15:21:25.285134419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 15:21:25.289370 containerd[1500]: time="2025-02-13T15:21:25.289326266Z" level=info msg="CreateContainer within sandbox \"29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:21:25.309570 containerd[1500]: time="2025-02-13T15:21:25.309488733Z" level=info msg="CreateContainer within sandbox \"29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55\""
Feb 13 15:21:25.311248 containerd[1500]: time="2025-02-13T15:21:25.310688838Z" level=info msg="StartContainer for \"91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55\""
Feb 13 15:21:25.346462 systemd[1]: Started cri-containerd-91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55.scope - libcontainer container 91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55.
Feb 13 15:21:25.383531 containerd[1500]: time="2025-02-13T15:21:25.383459553Z" level=info msg="StartContainer for \"91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55\" returns successfully"
Feb 13 15:21:25.554339 kubelet[2020]: E0213 15:21:25.554299 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:25.632197 kubelet[2020]: E0213 15:21:25.631760 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9"
Feb 13 15:21:25.883480 containerd[1500]: time="2025-02-13T15:21:25.882758825Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:21:25.887115 systemd[1]: cri-containerd-91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55.scope: Deactivated successfully.
Feb 13 15:21:25.889352 kubelet[2020]: I0213 15:21:25.888621 2020 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 15:21:25.888679 systemd[1]: cri-containerd-91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55.scope: Consumed 489ms CPU time, 171.7M memory peak, 147.4M written to disk.
Feb 13 15:21:25.918872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55-rootfs.mount: Deactivated successfully.
Feb 13 15:21:26.081846 containerd[1500]: time="2025-02-13T15:21:26.081723655Z" level=info msg="shim disconnected" id=91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55 namespace=k8s.io
Feb 13 15:21:26.082162 containerd[1500]: time="2025-02-13T15:21:26.081828691Z" level=warning msg="cleaning up after shim disconnected" id=91b11ad7f3a9167acdb1338f8b22c0d0c8f2daeace59c30c1283d3de2817cd55 namespace=k8s.io
Feb 13 15:21:26.082162 containerd[1500]: time="2025-02-13T15:21:26.081872529Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:21:26.555456 kubelet[2020]: E0213 15:21:26.555380 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:26.675532 containerd[1500]: time="2025-02-13T15:21:26.675244934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 15:21:27.555756 kubelet[2020]: E0213 15:21:27.555692 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:27.639224 systemd[1]: Created slice kubepods-besteffort-podb4d9d4af_c074_438d_84cb_6509cb3860d9.slice - libcontainer container kubepods-besteffort-podb4d9d4af_c074_438d_84cb_6509cb3860d9.slice.
Feb 13 15:21:27.642072 containerd[1500]: time="2025-02-13T15:21:27.642020807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:0,}"
Feb 13 15:21:27.725087 containerd[1500]: time="2025-02-13T15:21:27.724906916Z" level=error msg="Failed to destroy network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:27.727539 containerd[1500]: time="2025-02-13T15:21:27.727482582Z" level=error msg="encountered an error cleaning up failed sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:27.727518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68-shm.mount: Deactivated successfully.
Feb 13 15:21:27.727761 containerd[1500]: time="2025-02-13T15:21:27.727570498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:27.727868 kubelet[2020]: E0213 15:21:27.727816 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:27.727948 kubelet[2020]: E0213 15:21:27.727923 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw"
Feb 13 15:21:27.728000 kubelet[2020]: E0213 15:21:27.727948 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw"
Feb 13 15:21:27.728039 kubelet[2020]: E0213 15:21:27.727986 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9"
Feb 13 15:21:28.556761 kubelet[2020]: E0213 15:21:28.556675 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:28.680046 kubelet[2020]: I0213 15:21:28.680013 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68"
Feb 13 15:21:28.681083 containerd[1500]: time="2025-02-13T15:21:28.681044774Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\""
Feb 13 15:21:28.681355 containerd[1500]: time="2025-02-13T15:21:28.681329885Z" level=info msg="Ensure that sandbox a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68 in task-service has been cleanup successfully"
Feb 13 15:21:28.683047 systemd[1]: run-netns-cni\x2dab66f100\x2de8c2\x2dce2e\x2dd47b\x2d3999ce736b52.mount: Deactivated successfully.
Feb 13 15:21:28.683640 containerd[1500]: time="2025-02-13T15:21:28.683607571Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully"
Feb 13 15:21:28.683640 containerd[1500]: time="2025-02-13T15:21:28.683637210Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully"
Feb 13 15:21:28.686509 containerd[1500]: time="2025-02-13T15:21:28.686472318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:1,}"
Feb 13 15:21:28.763750 containerd[1500]: time="2025-02-13T15:21:28.763685783Z" level=error msg="Failed to destroy network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:28.765374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa-shm.mount: Deactivated successfully.
Feb 13 15:21:28.766490 containerd[1500]: time="2025-02-13T15:21:28.766439814Z" level=error msg="encountered an error cleaning up failed sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:28.766546 containerd[1500]: time="2025-02-13T15:21:28.766523732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:28.767440 kubelet[2020]: E0213 15:21:28.767403 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:28.767567 kubelet[2020]: E0213 15:21:28.767463 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw"
Feb 13 15:21:28.767567 kubelet[2020]: E0213 15:21:28.767482 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw"
Feb 13 15:21:28.767567 kubelet[2020]: E0213 15:21:28.767526 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9"
Feb 13 15:21:29.557998 kubelet[2020]: E0213 15:21:29.557935 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:29.684268 kubelet[2020]: I0213 15:21:29.683527 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa"
Feb 13 15:21:29.684427 containerd[1500]: time="2025-02-13T15:21:29.684374332Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\""
Feb 13 15:21:29.684693 containerd[1500]: time="2025-02-13T15:21:29.684668564Z" level=info msg="Ensure that sandbox c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa in task-service has been cleanup successfully"
Feb 13 15:21:29.686285 systemd[1]: run-netns-cni\x2d30108f5d\x2da93b\x2d221d\x2d1844\x2dd9d7576289c2.mount: Deactivated successfully.
Feb 13 15:21:29.686866 containerd[1500]: time="2025-02-13T15:21:29.686820944Z" level=info msg="TearDown network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully"
Feb 13 15:21:29.686866 containerd[1500]: time="2025-02-13T15:21:29.686865342Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully"
Feb 13 15:21:29.687544 containerd[1500]: time="2025-02-13T15:21:29.687499605Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\""
Feb 13 15:21:29.687653 containerd[1500]: time="2025-02-13T15:21:29.687630041Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully"
Feb 13 15:21:29.687683 containerd[1500]: time="2025-02-13T15:21:29.687650880Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully"
Feb 13 15:21:29.689118 containerd[1500]: time="2025-02-13T15:21:29.689079240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:2,}"
Feb 13 15:21:29.756303 containerd[1500]: time="2025-02-13T15:21:29.756155285Z" level=error msg="Failed to destroy network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:29.758035 containerd[1500]: time="2025-02-13T15:21:29.756965582Z" level=error msg="encountered an error cleaning up failed sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:29.758035 containerd[1500]: time="2025-02-13T15:21:29.757053380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:29.758179 kubelet[2020]: E0213 15:21:29.758145 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:29.758225 kubelet[2020]: E0213 15:21:29.758207 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw"
Feb 13 15:21:29.758225 kubelet[2020]: E0213 15:21:29.758248 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw"
Feb 13 15:21:29.758320 kubelet[2020]: E0213 15:21:29.758292 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9"
Feb 13 15:21:29.759283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171-shm.mount: Deactivated successfully.
Feb 13 15:21:30.558165 kubelet[2020]: E0213 15:21:30.558076 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:21:30.686981 kubelet[2020]: I0213 15:21:30.686868 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171"
Feb 13 15:21:30.687911 containerd[1500]: time="2025-02-13T15:21:30.687874858Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\""
Feb 13 15:21:30.688246 containerd[1500]: time="2025-02-13T15:21:30.688052853Z" level=info msg="Ensure that sandbox 3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171 in task-service has been cleanup successfully"
Feb 13 15:21:30.689869 systemd[1]: run-netns-cni\x2d3853e323\x2d5b48\x2dcb05\x2d6a26\x2d019efd221882.mount: Deactivated successfully.
Feb 13 15:21:30.690688 containerd[1500]: time="2025-02-13T15:21:30.690645272Z" level=info msg="TearDown network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" successfully"
Feb 13 15:21:30.690688 containerd[1500]: time="2025-02-13T15:21:30.690683831Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" returns successfully"
Feb 13 15:21:30.691185 containerd[1500]: time="2025-02-13T15:21:30.691157420Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\""
Feb 13 15:21:30.691291 containerd[1500]: time="2025-02-13T15:21:30.691273017Z" level=info msg="TearDown network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully"
Feb 13 15:21:30.691291 containerd[1500]: time="2025-02-13T15:21:30.691287737Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully"
Feb 13 15:21:30.692113 containerd[1500]:
time="2025-02-13T15:21:30.692089357Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\"" Feb 13 15:21:30.692187 containerd[1500]: time="2025-02-13T15:21:30.692173515Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully" Feb 13 15:21:30.692263 containerd[1500]: time="2025-02-13T15:21:30.692186355Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully" Feb 13 15:21:30.692776 containerd[1500]: time="2025-02-13T15:21:30.692753582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:3,}" Feb 13 15:21:30.761544 containerd[1500]: time="2025-02-13T15:21:30.761471390Z" level=error msg="Failed to destroy network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:30.761544 containerd[1500]: time="2025-02-13T15:21:30.761924740Z" level=error msg="encountered an error cleaning up failed sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:30.761544 containerd[1500]: time="2025-02-13T15:21:30.761988458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:30.765190 kubelet[2020]: E0213 15:21:30.762201 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:30.765190 kubelet[2020]: E0213 15:21:30.762280 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:30.765190 kubelet[2020]: E0213 15:21:30.762301 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:30.765332 kubelet[2020]: E0213 15:21:30.762345 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9" Feb 13 15:21:30.765515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1-shm.mount: Deactivated successfully. Feb 13 15:21:31.558808 kubelet[2020]: E0213 15:21:31.558528 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:31.691034 kubelet[2020]: I0213 15:21:31.691000 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1" Feb 13 15:21:31.691852 containerd[1500]: time="2025-02-13T15:21:31.691818731Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\"" Feb 13 15:21:31.692157 containerd[1500]: time="2025-02-13T15:21:31.691990328Z" level=info msg="Ensure that sandbox 0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1 in task-service has been cleanup successfully" Feb 13 15:21:31.693649 systemd[1]: run-netns-cni\x2d4c205bc8\x2d2966\x2db672\x2d326b\x2dcf8deb4cc6b4.mount: Deactivated successfully. 
Feb 13 15:21:31.694730 containerd[1500]: time="2025-02-13T15:21:31.694364601Z" level=info msg="TearDown network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" successfully" Feb 13 15:21:31.694730 containerd[1500]: time="2025-02-13T15:21:31.694396921Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" returns successfully" Feb 13 15:21:31.695223 containerd[1500]: time="2025-02-13T15:21:31.695040948Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\"" Feb 13 15:21:31.695223 containerd[1500]: time="2025-02-13T15:21:31.695163106Z" level=info msg="TearDown network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" successfully" Feb 13 15:21:31.695223 containerd[1500]: time="2025-02-13T15:21:31.695174025Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" returns successfully" Feb 13 15:21:31.695773 containerd[1500]: time="2025-02-13T15:21:31.695692095Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\"" Feb 13 15:21:31.696059 containerd[1500]: time="2025-02-13T15:21:31.695881411Z" level=info msg="TearDown network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully" Feb 13 15:21:31.696059 containerd[1500]: time="2025-02-13T15:21:31.695899011Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully" Feb 13 15:21:31.696546 containerd[1500]: time="2025-02-13T15:21:31.696510119Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\"" Feb 13 15:21:31.696816 containerd[1500]: time="2025-02-13T15:21:31.696766634Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully" Feb 
13 15:21:31.696816 containerd[1500]: time="2025-02-13T15:21:31.696788434Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully" Feb 13 15:21:31.697539 containerd[1500]: time="2025-02-13T15:21:31.697412461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:4,}" Feb 13 15:21:31.774241 containerd[1500]: time="2025-02-13T15:21:31.774172713Z" level=error msg="Failed to destroy network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:31.776164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86-shm.mount: Deactivated successfully. 
Feb 13 15:21:31.777595 containerd[1500]: time="2025-02-13T15:21:31.776844260Z" level=error msg="encountered an error cleaning up failed sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:31.777796 containerd[1500]: time="2025-02-13T15:21:31.777681484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:31.778089 kubelet[2020]: E0213 15:21:31.777968 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:31.778089 kubelet[2020]: E0213 15:21:31.778023 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:31.778089 kubelet[2020]: E0213 15:21:31.778042 2020 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:31.778219 kubelet[2020]: E0213 15:21:31.778076 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9" Feb 13 15:21:32.559587 kubelet[2020]: E0213 15:21:32.559315 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:32.696976 kubelet[2020]: I0213 15:21:32.696858 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86" Feb 13 15:21:32.697861 containerd[1500]: time="2025-02-13T15:21:32.697807124Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\"" Feb 13 15:21:32.701210 containerd[1500]: time="2025-02-13T15:21:32.698004401Z" level=info msg="Ensure that sandbox 76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86 in task-service has 
been cleanup successfully" Feb 13 15:21:32.701210 containerd[1500]: time="2025-02-13T15:21:32.698205518Z" level=info msg="TearDown network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" successfully" Feb 13 15:21:32.701210 containerd[1500]: time="2025-02-13T15:21:32.698222558Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" returns successfully" Feb 13 15:21:32.699833 systemd[1]: run-netns-cni\x2d03e07b81\x2d15cf\x2d1310\x2dd930\x2d866589282192.mount: Deactivated successfully. Feb 13 15:21:32.702636 containerd[1500]: time="2025-02-13T15:21:32.701971819Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\"" Feb 13 15:21:32.702636 containerd[1500]: time="2025-02-13T15:21:32.702089017Z" level=info msg="TearDown network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" successfully" Feb 13 15:21:32.702636 containerd[1500]: time="2025-02-13T15:21:32.702100737Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" returns successfully" Feb 13 15:21:32.704446 containerd[1500]: time="2025-02-13T15:21:32.704406261Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\"" Feb 13 15:21:32.704918 containerd[1500]: time="2025-02-13T15:21:32.704791135Z" level=info msg="TearDown network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" successfully" Feb 13 15:21:32.705033 containerd[1500]: time="2025-02-13T15:21:32.705016611Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" returns successfully" Feb 13 15:21:32.705654 containerd[1500]: time="2025-02-13T15:21:32.705603082Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\"" Feb 13 15:21:32.705837 containerd[1500]: 
time="2025-02-13T15:21:32.705798839Z" level=info msg="TearDown network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully" Feb 13 15:21:32.705893 containerd[1500]: time="2025-02-13T15:21:32.705838358Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully" Feb 13 15:21:32.706620 containerd[1500]: time="2025-02-13T15:21:32.706446669Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\"" Feb 13 15:21:32.706620 containerd[1500]: time="2025-02-13T15:21:32.706544147Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully" Feb 13 15:21:32.706620 containerd[1500]: time="2025-02-13T15:21:32.706555027Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully" Feb 13 15:21:32.707204 containerd[1500]: time="2025-02-13T15:21:32.707174497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:5,}" Feb 13 15:21:32.803479 containerd[1500]: time="2025-02-13T15:21:32.803316588Z" level=error msg="Failed to destroy network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:32.805194 containerd[1500]: time="2025-02-13T15:21:32.804998842Z" level=error msg="encountered an error cleaning up failed sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 15:21:32.805194 containerd[1500]: time="2025-02-13T15:21:32.805087921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:32.806435 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc-shm.mount: Deactivated successfully. Feb 13 15:21:32.807434 kubelet[2020]: E0213 15:21:32.807392 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:32.807548 kubelet[2020]: E0213 15:21:32.807454 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:32.807548 kubelet[2020]: E0213 15:21:32.807476 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:32.807548 kubelet[2020]: E0213 15:21:32.807522 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9" Feb 13 15:21:32.835790 systemd[1]: Created slice kubepods-besteffort-podcb522f00_64ef_4848_9023_80f92d0fc030.slice - libcontainer container kubepods-besteffort-podcb522f00_64ef_4848_9023_80f92d0fc030.slice. 
Feb 13 15:21:33.003178 kubelet[2020]: I0213 15:21:33.002979 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw2wf\" (UniqueName: \"kubernetes.io/projected/cb522f00-64ef-4848-9023-80f92d0fc030-kube-api-access-hw2wf\") pod \"nginx-deployment-8587fbcb89-cv5wj\" (UID: \"cb522f00-64ef-4848-9023-80f92d0fc030\") " pod="default/nginx-deployment-8587fbcb89-cv5wj" Feb 13 15:21:33.142253 containerd[1500]: time="2025-02-13T15:21:33.141801580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cv5wj,Uid:cb522f00-64ef-4848-9023-80f92d0fc030,Namespace:default,Attempt:0,}" Feb 13 15:21:33.245168 containerd[1500]: time="2025-02-13T15:21:33.245032636Z" level=error msg="Failed to destroy network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.246243 containerd[1500]: time="2025-02-13T15:21:33.246095863Z" level=error msg="encountered an error cleaning up failed sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.246243 containerd[1500]: time="2025-02-13T15:21:33.246203342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cv5wj,Uid:cb522f00-64ef-4848-9023-80f92d0fc030,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.246511 kubelet[2020]: E0213 15:21:33.246445 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.246511 kubelet[2020]: E0213 15:21:33.246510 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-cv5wj" Feb 13 15:21:33.246511 kubelet[2020]: E0213 15:21:33.246532 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-cv5wj" Feb 13 15:21:33.246799 kubelet[2020]: E0213 15:21:33.246577 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-cv5wj_default(cb522f00-64ef-4848-9023-80f92d0fc030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-cv5wj_default(cb522f00-64ef-4848-9023-80f92d0fc030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-cv5wj" podUID="cb522f00-64ef-4848-9023-80f92d0fc030" Feb 13 15:21:33.559550 kubelet[2020]: E0213 15:21:33.559476 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:33.704194 kubelet[2020]: I0213 15:21:33.702386 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771" Feb 13 15:21:33.705178 containerd[1500]: time="2025-02-13T15:21:33.703598839Z" level=info msg="StopPodSandbox for \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\"" Feb 13 15:21:33.705178 containerd[1500]: time="2025-02-13T15:21:33.703890756Z" level=info msg="Ensure that sandbox 3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771 in task-service has been cleanup successfully" Feb 13 15:21:33.710414 kubelet[2020]: I0213 15:21:33.706655 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc" Feb 13 15:21:33.710518 containerd[1500]: time="2025-02-13T15:21:33.708694659Z" level=info msg="StopPodSandbox for \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\"" Feb 13 15:21:33.710518 containerd[1500]: time="2025-02-13T15:21:33.708873257Z" level=info msg="Ensure that sandbox 8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc in task-service has been cleanup successfully" Feb 13 15:21:33.708324 systemd[1]: run-netns-cni\x2d79bcba66\x2ded3c\x2da2ca\x2d7146\x2d8e7f6334d1ce.mount: Deactivated successfully. 
Feb 13 15:21:33.711077 containerd[1500]: time="2025-02-13T15:21:33.710997511Z" level=info msg="TearDown network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" successfully" Feb 13 15:21:33.711077 containerd[1500]: time="2025-02-13T15:21:33.711035391Z" level=info msg="StopPodSandbox for \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" returns successfully" Feb 13 15:21:33.711835 containerd[1500]: time="2025-02-13T15:21:33.711804702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cv5wj,Uid:cb522f00-64ef-4848-9023-80f92d0fc030,Namespace:default,Attempt:1,}" Feb 13 15:21:33.714883 systemd[1]: run-netns-cni\x2d127e193f\x2db7eb\x2d66f3\x2d2248\x2d2d03c4ce6629.mount: Deactivated successfully. Feb 13 15:21:33.716653 containerd[1500]: time="2025-02-13T15:21:33.716194570Z" level=info msg="TearDown network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" successfully" Feb 13 15:21:33.716653 containerd[1500]: time="2025-02-13T15:21:33.716245729Z" level=info msg="StopPodSandbox for \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" returns successfully" Feb 13 15:21:33.717627 containerd[1500]: time="2025-02-13T15:21:33.717457635Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\"" Feb 13 15:21:33.717627 containerd[1500]: time="2025-02-13T15:21:33.717555434Z" level=info msg="TearDown network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" successfully" Feb 13 15:21:33.717627 containerd[1500]: time="2025-02-13T15:21:33.717564993Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" returns successfully" Feb 13 15:21:33.719388 containerd[1500]: time="2025-02-13T15:21:33.718090107Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\"" Feb 13 
15:21:33.719388 containerd[1500]: time="2025-02-13T15:21:33.718255185Z" level=info msg="TearDown network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" successfully" Feb 13 15:21:33.719388 containerd[1500]: time="2025-02-13T15:21:33.718267105Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" returns successfully" Feb 13 15:21:33.719388 containerd[1500]: time="2025-02-13T15:21:33.718775219Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\"" Feb 13 15:21:33.719388 containerd[1500]: time="2025-02-13T15:21:33.718962497Z" level=info msg="TearDown network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" successfully" Feb 13 15:21:33.719388 containerd[1500]: time="2025-02-13T15:21:33.718985217Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" returns successfully" Feb 13 15:21:33.719629 containerd[1500]: time="2025-02-13T15:21:33.719468931Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\"" Feb 13 15:21:33.719995 containerd[1500]: time="2025-02-13T15:21:33.719830367Z" level=info msg="TearDown network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully" Feb 13 15:21:33.719995 containerd[1500]: time="2025-02-13T15:21:33.719847606Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully" Feb 13 15:21:33.721782 containerd[1500]: time="2025-02-13T15:21:33.721555426Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\"" Feb 13 15:21:33.721782 containerd[1500]: time="2025-02-13T15:21:33.721702424Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully" Feb 13 
15:21:33.721782 containerd[1500]: time="2025-02-13T15:21:33.721724464Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully" Feb 13 15:21:33.726072 containerd[1500]: time="2025-02-13T15:21:33.725860615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:6,}" Feb 13 15:21:33.831659 containerd[1500]: time="2025-02-13T15:21:33.830448735Z" level=error msg="Failed to destroy network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.831659 containerd[1500]: time="2025-02-13T15:21:33.830839610Z" level=error msg="encountered an error cleaning up failed sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.831659 containerd[1500]: time="2025-02-13T15:21:33.830900930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cv5wj,Uid:cb522f00-64ef-4848-9023-80f92d0fc030,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.832248 kubelet[2020]: E0213 15:21:33.831173 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.832248 kubelet[2020]: E0213 15:21:33.831242 2020 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-cv5wj" Feb 13 15:21:33.832248 kubelet[2020]: E0213 15:21:33.831268 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-cv5wj" Feb 13 15:21:33.832412 kubelet[2020]: E0213 15:21:33.831307 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-cv5wj_default(cb522f00-64ef-4848-9023-80f92d0fc030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-cv5wj_default(cb522f00-64ef-4848-9023-80f92d0fc030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-8587fbcb89-cv5wj" podUID="cb522f00-64ef-4848-9023-80f92d0fc030" Feb 13 15:21:33.855421 containerd[1500]: time="2025-02-13T15:21:33.855363440Z" level=error msg="Failed to destroy network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.855758 containerd[1500]: time="2025-02-13T15:21:33.855720835Z" level=error msg="encountered an error cleaning up failed sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.855828 containerd[1500]: time="2025-02-13T15:21:33.855788355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.856058 kubelet[2020]: E0213 15:21:33.856020 2020 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:33.857026 kubelet[2020]: E0213 15:21:33.856611 2020 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:33.857026 kubelet[2020]: E0213 15:21:33.856651 2020 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2t7hw" Feb 13 15:21:33.857026 kubelet[2020]: E0213 15:21:33.856735 2020 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2t7hw_calico-system(b4d9d4af-c074-438d-84cb-6509cb3860d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2t7hw" podUID="b4d9d4af-c074-438d-84cb-6509cb3860d9" Feb 13 15:21:34.041590 containerd[1500]: time="2025-02-13T15:21:34.041513226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:34.042651 containerd[1500]: 
time="2025-02-13T15:21:34.042595097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:21:34.043893 containerd[1500]: time="2025-02-13T15:21:34.043722488Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:34.047214 containerd[1500]: time="2025-02-13T15:21:34.047141060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:34.048070 containerd[1500]: time="2025-02-13T15:21:34.048039013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.37275172s" Feb 13 15:21:34.048515 containerd[1500]: time="2025-02-13T15:21:34.048158612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:21:34.056831 containerd[1500]: time="2025-02-13T15:21:34.056779582Z" level=info msg="CreateContainer within sandbox \"29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:21:34.078815 containerd[1500]: time="2025-02-13T15:21:34.078667484Z" level=info msg="CreateContainer within sandbox \"29b14766c4f20d6ebfad5d1039489ba7a3744a4839a67d4636ace4ac55f65c06\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"144aa3cac9358394be5cf3c59c0e22e02c10e43f4043aeb7f3d998e249ff0b91\"" Feb 13 15:21:34.081307 
containerd[1500]: time="2025-02-13T15:21:34.079819554Z" level=info msg="StartContainer for \"144aa3cac9358394be5cf3c59c0e22e02c10e43f4043aeb7f3d998e249ff0b91\"" Feb 13 15:21:34.116582 systemd[1]: Started cri-containerd-144aa3cac9358394be5cf3c59c0e22e02c10e43f4043aeb7f3d998e249ff0b91.scope - libcontainer container 144aa3cac9358394be5cf3c59c0e22e02c10e43f4043aeb7f3d998e249ff0b91. Feb 13 15:21:34.151740 containerd[1500]: time="2025-02-13T15:21:34.151675569Z" level=info msg="StartContainer for \"144aa3cac9358394be5cf3c59c0e22e02c10e43f4043aeb7f3d998e249ff0b91\" returns successfully" Feb 13 15:21:34.262255 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:21:34.262356 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:21:34.546863 kubelet[2020]: E0213 15:21:34.546569 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:34.560329 kubelet[2020]: E0213 15:21:34.560264 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:34.705377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1-shm.mount: Deactivated successfully. Feb 13 15:21:34.705474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa-shm.mount: Deactivated successfully. Feb 13 15:21:34.705525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322813255.mount: Deactivated successfully. 
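The repeated sandbox failures above all trace back to one condition: the CNI plugin stats `/var/lib/calico/nodename` before wiring up pod networking, and that file only exists once the calico-node container (started just above as `144aa3cac935…`) is running with `/var/lib/calico/` mounted. A minimal shell sketch of that stat-based readiness check — the path is taken from the log's own error text, and the helper name `nodename_ready` is hypothetical, not part of Calico:

```shell
#!/bin/sh
# Hypothetical readiness probe mirroring the plugin's failure mode seen in the
# log: "stat /var/lib/calico/nodename: no such file or directory" until
# calico-node has mounted /var/lib/calico/ and written the nodename file.
nodename_ready() {
  f="${1:-/var/lib/calico/nodename}"
  if [ -f "$f" ]; then
    # File present: calico-node has registered this host.
    echo "ready: $(cat "$f")"
    return 0
  fi
  # Same shape as the CNI plugin's error message in the log above.
  echo "stat $f: no such file or directory" >&2
  return 1
}

# Demonstration against a path that is absent on any host:
nodename_ready /nonexistent/calico/nodename || echo "calico-node not ready yet"
```

Once the file appears, subsequent `RunPodSandbox` attempts succeed, which is exactly the transition the log shows after the calico-node `StartContainer` entry.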
Feb 13 15:21:34.712994 kubelet[2020]: I0213 15:21:34.712578 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa" Feb 13 15:21:34.713603 containerd[1500]: time="2025-02-13T15:21:34.713423836Z" level=info msg="StopPodSandbox for \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\"" Feb 13 15:21:34.716366 containerd[1500]: time="2025-02-13T15:21:34.713632795Z" level=info msg="Ensure that sandbox c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa in task-service has been cleanup successfully" Feb 13 15:21:34.716094 systemd[1]: run-netns-cni\x2d37ae98e7\x2db0e5\x2d00dd\x2d32a3\x2da99f3c0262e2.mount: Deactivated successfully. Feb 13 15:21:34.718596 containerd[1500]: time="2025-02-13T15:21:34.717483603Z" level=info msg="TearDown network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\" successfully" Feb 13 15:21:34.718596 containerd[1500]: time="2025-02-13T15:21:34.717520243Z" level=info msg="StopPodSandbox for \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\" returns successfully" Feb 13 15:21:34.719890 containerd[1500]: time="2025-02-13T15:21:34.719849104Z" level=info msg="StopPodSandbox for \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\"" Feb 13 15:21:34.720008 containerd[1500]: time="2025-02-13T15:21:34.719980543Z" level=info msg="TearDown network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" successfully" Feb 13 15:21:34.720008 containerd[1500]: time="2025-02-13T15:21:34.720002583Z" level=info msg="StopPodSandbox for \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" returns successfully" Feb 13 15:21:34.720760 containerd[1500]: time="2025-02-13T15:21:34.720676217Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cv5wj,Uid:cb522f00-64ef-4848-9023-80f92d0fc030,Namespace:default,Attempt:2,}" Feb 13 15:21:34.734725 kubelet[2020]: I0213 15:21:34.734676 2020 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1" Feb 13 15:21:34.738562 containerd[1500]: time="2025-02-13T15:21:34.737659359Z" level=info msg="StopPodSandbox for \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\"" Feb 13 15:21:34.741301 containerd[1500]: time="2025-02-13T15:21:34.739445744Z" level=info msg="Ensure that sandbox 51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1 in task-service has been cleanup successfully" Feb 13 15:21:34.741409 containerd[1500]: time="2025-02-13T15:21:34.741354129Z" level=info msg="TearDown network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\" successfully" Feb 13 15:21:34.741409 containerd[1500]: time="2025-02-13T15:21:34.741386969Z" level=info msg="StopPodSandbox for \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\" returns successfully" Feb 13 15:21:34.742087 containerd[1500]: time="2025-02-13T15:21:34.741971364Z" level=info msg="StopPodSandbox for \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\"" Feb 13 15:21:34.743679 containerd[1500]: time="2025-02-13T15:21:34.743591311Z" level=info msg="TearDown network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" successfully" Feb 13 15:21:34.744031 containerd[1500]: time="2025-02-13T15:21:34.743821269Z" level=info msg="StopPodSandbox for \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" returns successfully" Feb 13 15:21:34.744542 containerd[1500]: time="2025-02-13T15:21:34.744479663Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\"" Feb 13 15:21:34.744730 containerd[1500]: 
time="2025-02-13T15:21:34.744713741Z" level=info msg="TearDown network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" successfully" Feb 13 15:21:34.744763 systemd[1]: run-netns-cni\x2dc99c2e3c\x2d6471\x2dc6a1\x2db4a5\x2de16e0dea1f64.mount: Deactivated successfully. Feb 13 15:21:34.744969 containerd[1500]: time="2025-02-13T15:21:34.744949860Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" returns successfully" Feb 13 15:21:34.745364 containerd[1500]: time="2025-02-13T15:21:34.745343256Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\"" Feb 13 15:21:34.745833 containerd[1500]: time="2025-02-13T15:21:34.745745413Z" level=info msg="TearDown network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" successfully" Feb 13 15:21:34.745833 containerd[1500]: time="2025-02-13T15:21:34.745764533Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" returns successfully" Feb 13 15:21:34.746264 containerd[1500]: time="2025-02-13T15:21:34.746237209Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\"" Feb 13 15:21:34.746568 containerd[1500]: time="2025-02-13T15:21:34.746442887Z" level=info msg="TearDown network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" successfully" Feb 13 15:21:34.746568 containerd[1500]: time="2025-02-13T15:21:34.746461047Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" returns successfully" Feb 13 15:21:34.746970 containerd[1500]: time="2025-02-13T15:21:34.746925803Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\"" Feb 13 15:21:34.747048 containerd[1500]: time="2025-02-13T15:21:34.747010443Z" level=info msg="TearDown network for 
sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully" Feb 13 15:21:34.747048 containerd[1500]: time="2025-02-13T15:21:34.747022123Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully" Feb 13 15:21:34.748942 containerd[1500]: time="2025-02-13T15:21:34.747997555Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\"" Feb 13 15:21:34.749498 containerd[1500]: time="2025-02-13T15:21:34.749105066Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully" Feb 13 15:21:34.749498 containerd[1500]: time="2025-02-13T15:21:34.749128906Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully" Feb 13 15:21:34.750079 containerd[1500]: time="2025-02-13T15:21:34.750014898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:7,}" Feb 13 15:21:34.750638 kubelet[2020]: I0213 15:21:34.750580 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-59d4w" podStartSLOduration=3.427484822 podStartE2EDuration="20.750559534s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="2025-02-13 15:21:16.725743775 +0000 UTC m=+2.795525970" lastFinishedPulling="2025-02-13 15:21:34.048818527 +0000 UTC m=+20.118600682" observedRunningTime="2025-02-13 15:21:34.748216593 +0000 UTC m=+20.817998788" watchObservedRunningTime="2025-02-13 15:21:34.750559534 +0000 UTC m=+20.820341729" Feb 13 15:21:34.971120 systemd-networkd[1398]: caliba1b4dc9cb6: Link UP Feb 13 15:21:34.971889 systemd-networkd[1398]: caliba1b4dc9cb6: Gained carrier Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.808 [INFO][2912] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.847 [INFO][2912] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--2t7hw-eth0 csi-node-driver- calico-system b4d9d4af-c074-438d-84cb-6509cb3860d9 1467 0 2025-02-13 15:21:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-2t7hw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliba1b4dc9cb6 [] []}} ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.847 [INFO][2912] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.889 [INFO][2932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" HandleID="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Workload="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.907 [INFO][2932] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" HandleID="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" 
Workload="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317370), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-2t7hw", "timestamp":"2025-02-13 15:21:34.88980676 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.907 [INFO][2932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.907 [INFO][2932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.907 [INFO][2932] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.912 [INFO][2932] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.919 [INFO][2932] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.929 [INFO][2932] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.933 [INFO][2932] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.942 [INFO][2932] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.942 [INFO][2932] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 
handle="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.945 [INFO][2932] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.951 [INFO][2932] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.959 [INFO][2932] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.959 [INFO][2932] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" host="10.0.0.4" Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.959 [INFO][2932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:21:34.990535 containerd[1500]: 2025-02-13 15:21:34.959 [INFO][2932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" HandleID="k8s-pod-network.15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Workload="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" Feb 13 15:21:34.991160 containerd[1500]: 2025-02-13 15:21:34.963 [INFO][2912] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--2t7hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4d9d4af-c074-438d-84cb-6509cb3860d9", ResourceVersion:"1467", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-2t7hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba1b4dc9cb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:34.991160 containerd[1500]: 2025-02-13 15:21:34.964 [INFO][2912] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" Feb 13 15:21:34.991160 containerd[1500]: 2025-02-13 15:21:34.964 [INFO][2912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba1b4dc9cb6 ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" Feb 13 15:21:34.991160 containerd[1500]: 2025-02-13 15:21:34.972 [INFO][2912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" Feb 13 15:21:34.991160 containerd[1500]: 2025-02-13 15:21:34.972 [INFO][2912] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--2t7hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4d9d4af-c074-438d-84cb-6509cb3860d9", ResourceVersion:"1467", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 14, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d", Pod:"csi-node-driver-2t7hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba1b4dc9cb6", MAC:"6e:f4:7f:f9:fb:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:34.991160 containerd[1500]: 2025-02-13 15:21:34.988 [INFO][2912] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d" Namespace="calico-system" Pod="csi-node-driver-2t7hw" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2t7hw-eth0" Feb 13 15:21:35.011633 containerd[1500]: time="2025-02-13T15:21:35.011293491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:35.011633 containerd[1500]: time="2025-02-13T15:21:35.011385211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:35.011633 containerd[1500]: time="2025-02-13T15:21:35.011401451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:35.011633 containerd[1500]: time="2025-02-13T15:21:35.011579730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:35.031451 systemd[1]: Started cri-containerd-15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d.scope - libcontainer container 15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d. Feb 13 15:21:35.068391 systemd-networkd[1398]: cali62d617be752: Link UP Feb 13 15:21:35.071251 containerd[1500]: time="2025-02-13T15:21:35.071139579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2t7hw,Uid:b4d9d4af-c074-438d-84cb-6509cb3860d9,Namespace:calico-system,Attempt:7,} returns sandbox id \"15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d\"" Feb 13 15:21:35.071789 systemd-networkd[1398]: cali62d617be752: Gained carrier Feb 13 15:21:35.076536 containerd[1500]: time="2025-02-13T15:21:35.076502035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.792 [INFO][2893] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.841 [INFO][2893] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0 nginx-deployment-8587fbcb89- default cb522f00-64ef-4848-9023-80f92d0fc030 1561 0 2025-02-13 15:21:32 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-8587fbcb89-cv5wj eth0 default [] [] [kns.default ksa.default.default] cali62d617be752 [] []}} ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" 
Pod="nginx-deployment-8587fbcb89-cv5wj" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.842 [INFO][2893] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" Pod="nginx-deployment-8587fbcb89-cv5wj" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.889 [INFO][2931] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" HandleID="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.908 [INFO][2931] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" HandleID="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004dad00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-8587fbcb89-cv5wj", "timestamp":"2025-02-13 15:21:34.88980656 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.908 [INFO][2931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.959 [INFO][2931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:34.959 [INFO][2931] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.012 [INFO][2931] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.022 [INFO][2931] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.030 [INFO][2931] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.034 [INFO][2931] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.039 [INFO][2931] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.039 [INFO][2931] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.042 [INFO][2931] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763 Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.049 [INFO][2931] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.056 [INFO][2931] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 
handle="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.056 [INFO][2931] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" host="10.0.0.4" Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.056 [INFO][2931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:21:35.087545 containerd[1500]: 2025-02-13 15:21:35.056 [INFO][2931] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" HandleID="k8s-pod-network.046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" Feb 13 15:21:35.088369 containerd[1500]: 2025-02-13 15:21:35.062 [INFO][2893] cni-plugin/k8s.go 386: Populated endpoint ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" Pod="nginx-deployment-8587fbcb89-cv5wj" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"cb522f00-64ef-4848-9023-80f92d0fc030", ResourceVersion:"1561", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-cv5wj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali62d617be752", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:35.088369 containerd[1500]: 2025-02-13 15:21:35.062 [INFO][2893] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" Pod="nginx-deployment-8587fbcb89-cv5wj" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" Feb 13 15:21:35.088369 containerd[1500]: 2025-02-13 15:21:35.062 [INFO][2893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62d617be752 ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" Pod="nginx-deployment-8587fbcb89-cv5wj" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" Feb 13 15:21:35.088369 containerd[1500]: 2025-02-13 15:21:35.070 [INFO][2893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" Pod="nginx-deployment-8587fbcb89-cv5wj" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" Feb 13 15:21:35.088369 containerd[1500]: 2025-02-13 15:21:35.072 [INFO][2893] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" Pod="nginx-deployment-8587fbcb89-cv5wj" 
WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"cb522f00-64ef-4848-9023-80f92d0fc030", ResourceVersion:"1561", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763", Pod:"nginx-deployment-8587fbcb89-cv5wj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali62d617be752", MAC:"a6:1e:72:a3:dd:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:35.088369 containerd[1500]: 2025-02-13 15:21:35.085 [INFO][2893] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763" Namespace="default" Pod="nginx-deployment-8587fbcb89-cv5wj" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--cv5wj-eth0" Feb 13 15:21:35.112151 containerd[1500]: time="2025-02-13T15:21:35.111899674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:35.112151 containerd[1500]: time="2025-02-13T15:21:35.111963434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:35.112151 containerd[1500]: time="2025-02-13T15:21:35.111978394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:35.112151 containerd[1500]: time="2025-02-13T15:21:35.112069474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:35.132608 systemd[1]: Started cri-containerd-046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763.scope - libcontainer container 046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763. Feb 13 15:21:35.168290 containerd[1500]: time="2025-02-13T15:21:35.168253378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cv5wj,Uid:cb522f00-64ef-4848-9023-80f92d0fc030,Namespace:default,Attempt:2,} returns sandbox id \"046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763\"" Feb 13 15:21:35.561051 kubelet[2020]: E0213 15:21:35.560944 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:35.944276 kernel: bpftool[3188]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:21:36.143377 systemd-networkd[1398]: vxlan.calico: Link UP Feb 13 15:21:36.143384 systemd-networkd[1398]: vxlan.calico: Gained carrier Feb 13 15:21:36.561512 kubelet[2020]: E0213 15:21:36.561446 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:36.633017 systemd-networkd[1398]: caliba1b4dc9cb6: Gained IPv6LL Feb 13 15:21:36.823714 systemd-networkd[1398]: cali62d617be752: Gained 
IPv6LL Feb 13 15:21:36.874015 containerd[1500]: time="2025-02-13T15:21:36.873888720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:36.875698 containerd[1500]: time="2025-02-13T15:21:36.875610318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:21:36.877064 containerd[1500]: time="2025-02-13T15:21:36.876993316Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:36.879454 containerd[1500]: time="2025-02-13T15:21:36.879388674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:36.880855 containerd[1500]: time="2025-02-13T15:21:36.880811352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.804135878s" Feb 13 15:21:36.880920 containerd[1500]: time="2025-02-13T15:21:36.880866072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:21:36.882865 containerd[1500]: time="2025-02-13T15:21:36.882805470Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:21:36.884016 containerd[1500]: time="2025-02-13T15:21:36.883957149Z" level=info msg="CreateContainer within sandbox \"15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:21:36.904758 containerd[1500]: time="2025-02-13T15:21:36.904657487Z" level=info msg="CreateContainer within sandbox \"15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"75a3bf3d19c9c8d25a62e5bbd4fcf0c3d635b0de9a579b25e44a36f8c30dc81e\"" Feb 13 15:21:36.907244 containerd[1500]: time="2025-02-13T15:21:36.905480406Z" level=info msg="StartContainer for \"75a3bf3d19c9c8d25a62e5bbd4fcf0c3d635b0de9a579b25e44a36f8c30dc81e\"" Feb 13 15:21:36.944696 systemd[1]: Started cri-containerd-75a3bf3d19c9c8d25a62e5bbd4fcf0c3d635b0de9a579b25e44a36f8c30dc81e.scope - libcontainer container 75a3bf3d19c9c8d25a62e5bbd4fcf0c3d635b0de9a579b25e44a36f8c30dc81e. Feb 13 15:21:36.980531 containerd[1500]: time="2025-02-13T15:21:36.980448847Z" level=info msg="StartContainer for \"75a3bf3d19c9c8d25a62e5bbd4fcf0c3d635b0de9a579b25e44a36f8c30dc81e\" returns successfully" Feb 13 15:21:37.562207 kubelet[2020]: E0213 15:21:37.562136 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:38.040361 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Feb 13 15:21:38.563186 kubelet[2020]: E0213 15:21:38.563138 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:39.380083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856948286.mount: Deactivated successfully. 
Feb 13 15:21:39.563918 kubelet[2020]: E0213 15:21:39.563883 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:40.164299 containerd[1500]: time="2025-02-13T15:21:40.164103303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:40.166434 containerd[1500]: time="2025-02-13T15:21:40.166223848Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 15:21:40.167625 containerd[1500]: time="2025-02-13T15:21:40.167520263Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:40.171281 containerd[1500]: time="2025-02-13T15:21:40.171172866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:40.173823 containerd[1500]: time="2025-02-13T15:21:40.173427213Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 3.290576263s" Feb 13 15:21:40.173823 containerd[1500]: time="2025-02-13T15:21:40.173473493Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 15:21:40.176417 containerd[1500]: time="2025-02-13T15:21:40.176055284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:21:40.177363 containerd[1500]: 
time="2025-02-13T15:21:40.177325019Z" level=info msg="CreateContainer within sandbox \"046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:21:40.194910 containerd[1500]: time="2025-02-13T15:21:40.194853707Z" level=info msg="CreateContainer within sandbox \"046cf161f346ca78891d10fe83a4988aa05dcef840d8eb9f569dd89fdfc67763\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"eb8b2b26b2030a86281f0ae598d18731f2359933adedfbcdb06dd38091e323d9\"" Feb 13 15:21:40.197039 containerd[1500]: time="2025-02-13T15:21:40.195776477Z" level=info msg="StartContainer for \"eb8b2b26b2030a86281f0ae598d18731f2359933adedfbcdb06dd38091e323d9\"" Feb 13 15:21:40.235683 systemd[1]: Started cri-containerd-eb8b2b26b2030a86281f0ae598d18731f2359933adedfbcdb06dd38091e323d9.scope - libcontainer container eb8b2b26b2030a86281f0ae598d18731f2359933adedfbcdb06dd38091e323d9. Feb 13 15:21:40.274039 containerd[1500]: time="2025-02-13T15:21:40.273515638Z" level=info msg="StartContainer for \"eb8b2b26b2030a86281f0ae598d18731f2359933adedfbcdb06dd38091e323d9\" returns successfully" Feb 13 15:21:40.380821 systemd[1]: run-containerd-runc-k8s.io-eb8b2b26b2030a86281f0ae598d18731f2359933adedfbcdb06dd38091e323d9-runc.4VuUDD.mount: Deactivated successfully. 
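[Annotation] The containerd entries above report per-image pull latency in their `Pulled image "…" … in <duration>s` payloads (e.g. the nginx pull completing "in 3.290576263s"). A minimal sketch of extracting image reference and pull duration from such lines with a regex — the pattern is an assumption based on the entries above, not a containerd-defined format, and the sample line is abridged:

```python
import re

# Matches the 'Pulled image \"<ref>\" ... in <secs>s' payload as it appears
# in the journal (quotes inside msg="..." are backslash-escaped).
PULL_RE = re.compile(r'Pulled image \\"(?P<ref>[^"]+?)\\".*? in (?P<secs>[0-9.]+)s')

def pull_durations(lines):
    """Yield (image_ref, seconds) for each completed pull found in the log."""
    for line in lines:
        m = PULL_RE.search(line)
        if m:
            yield m.group("ref"), float(m.group("secs"))

# Abridged synthetic sample modelled on the entries above.
sample = ('time="2025-02-13T15:21:36.880811352Z" level=info '
          'msg="Pulled image \\"ghcr.io/flatcar/calico/csi:v3.29.1\\" '
          'with image id \\"sha256:3c11734f3001b7\\", '
          'size \\"8834384\\" in 1.804135878s"')

print(list(pull_durations([sample])))
```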
Feb 13 15:21:40.565031 kubelet[2020]: E0213 15:21:40.564967 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:40.782314 kubelet[2020]: I0213 15:21:40.782256 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-cv5wj" podStartSLOduration=3.776686916 podStartE2EDuration="8.78221206s" podCreationTimestamp="2025-02-13 15:21:32 +0000 UTC" firstStartedPulling="2025-02-13 15:21:35.169794251 +0000 UTC m=+21.239576446" lastFinishedPulling="2025-02-13 15:21:40.175319395 +0000 UTC m=+26.245101590" observedRunningTime="2025-02-13 15:21:40.782153179 +0000 UTC m=+26.851935374" watchObservedRunningTime="2025-02-13 15:21:40.78221206 +0000 UTC m=+26.851994255" Feb 13 15:21:41.566016 kubelet[2020]: E0213 15:21:41.565947 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:42.039758 containerd[1500]: time="2025-02-13T15:21:42.039549750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:42.040899 containerd[1500]: time="2025-02-13T15:21:42.040771971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:21:42.042252 containerd[1500]: time="2025-02-13T15:21:42.042067674Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:42.044742 containerd[1500]: time="2025-02-13T15:21:42.044698041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:42.045922 
containerd[1500]: time="2025-02-13T15:21:42.045607937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.869468812s" Feb 13 15:21:42.045922 containerd[1500]: time="2025-02-13T15:21:42.045647938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:21:42.048440 containerd[1500]: time="2025-02-13T15:21:42.048404907Z" level=info msg="CreateContainer within sandbox \"15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:21:42.065751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680600097.mount: Deactivated successfully. Feb 13 15:21:42.071473 containerd[1500]: time="2025-02-13T15:21:42.071421794Z" level=info msg="CreateContainer within sandbox \"15859748f415d0ecffe57630631f1dd631c60f93c186ff21bf8f75251bba492d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e7693be995b443662500b87c57568b7117a15baf07fc84229a2b4cf8442efca3\"" Feb 13 15:21:42.072123 containerd[1500]: time="2025-02-13T15:21:42.072089926Z" level=info msg="StartContainer for \"e7693be995b443662500b87c57568b7117a15baf07fc84229a2b4cf8442efca3\"" Feb 13 15:21:42.111603 systemd[1]: Started cri-containerd-e7693be995b443662500b87c57568b7117a15baf07fc84229a2b4cf8442efca3.scope - libcontainer container e7693be995b443662500b87c57568b7117a15baf07fc84229a2b4cf8442efca3. 
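[Annotation] The kubelet pod_startup_latency_tracker entry for the nginx pod above reports podStartSLOduration=3.776686916 alongside podStartE2EDuration="8.78221206s" and the pull window bounded by firstStartedPulling (m=+21.239576446) and lastFinishedPulling (m=+26.245101590). The SLO duration appears to be the end-to-end startup time minus the image-pull window; this is a reading inferred from the numbers, and the arithmetic below checks it:

```python
# Monotonic offsets (the "m=+..." fields) and the E2E duration, taken
# verbatim from the pod_startup_latency_tracker entry above.
first_started_pulling = 21.239576446   # seconds, monotonic
last_finished_pulling = 26.245101590   # seconds, monotonic
e2e_duration          = 8.78221206     # podStartE2EDuration, seconds

# Inferred relationship (assumption): SLO duration excludes time spent
# pulling the image.
pull_window = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window

print(f"pull window: {pull_window:.9f}s, SLO duration: {slo_duration:.9f}s")
```

The result matches the logged podStartSLOduration to nanosecond precision, which supports the reading above.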
Feb 13 15:21:42.152430 containerd[1500]: time="2025-02-13T15:21:42.152189743Z" level=info msg="StartContainer for \"e7693be995b443662500b87c57568b7117a15baf07fc84229a2b4cf8442efca3\" returns successfully" Feb 13 15:21:42.566922 kubelet[2020]: E0213 15:21:42.566834 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:42.688982 kubelet[2020]: I0213 15:21:42.688821 2020 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:21:42.688982 kubelet[2020]: I0213 15:21:42.688869 2020 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:21:42.797533 kubelet[2020]: I0213 15:21:42.797391 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2t7hw" podStartSLOduration=21.826675915 podStartE2EDuration="28.79736696s" podCreationTimestamp="2025-02-13 15:21:14 +0000 UTC" firstStartedPulling="2025-02-13 15:21:35.076292036 +0000 UTC m=+21.146074191" lastFinishedPulling="2025-02-13 15:21:42.046983041 +0000 UTC m=+28.116765236" observedRunningTime="2025-02-13 15:21:42.795613129 +0000 UTC m=+28.865395404" watchObservedRunningTime="2025-02-13 15:21:42.79736696 +0000 UTC m=+28.867149155" Feb 13 15:21:43.567526 kubelet[2020]: E0213 15:21:43.567452 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:44.568275 kubelet[2020]: E0213 15:21:44.568179 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:45.569212 kubelet[2020]: E0213 15:21:45.569157 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
15:21:46.570166 kubelet[2020]: E0213 15:21:46.570106 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:47.570527 kubelet[2020]: E0213 15:21:47.570448 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:47.850817 systemd[1]: Created slice kubepods-besteffort-pod77362b31_345b_4344_8b44_a9882f0f93fc.slice - libcontainer container kubepods-besteffort-pod77362b31_345b_4344_8b44_a9882f0f93fc.slice. Feb 13 15:21:48.008315 kubelet[2020]: I0213 15:21:48.008021 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xt66\" (UniqueName: \"kubernetes.io/projected/77362b31-345b-4344-8b44-a9882f0f93fc-kube-api-access-5xt66\") pod \"nfs-server-provisioner-0\" (UID: \"77362b31-345b-4344-8b44-a9882f0f93fc\") " pod="default/nfs-server-provisioner-0" Feb 13 15:21:48.008315 kubelet[2020]: I0213 15:21:48.008127 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/77362b31-345b-4344-8b44-a9882f0f93fc-data\") pod \"nfs-server-provisioner-0\" (UID: \"77362b31-345b-4344-8b44-a9882f0f93fc\") " pod="default/nfs-server-provisioner-0" Feb 13 15:21:48.154854 containerd[1500]: time="2025-02-13T15:21:48.154669825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:77362b31-345b-4344-8b44-a9882f0f93fc,Namespace:default,Attempt:0,}" Feb 13 15:21:48.320607 systemd-networkd[1398]: cali60e51b789ff: Link UP Feb 13 15:21:48.321964 systemd-networkd[1398]: cali60e51b789ff: Gained carrier Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.215 [INFO][3469] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 
77362b31-345b-4344-8b44-a9882f0f93fc 1654 0 2025-02-13 15:21:47 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.215 [INFO][3469] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.248 [INFO][3479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" HandleID="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.269 [INFO][3479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" 
HandleID="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d0a00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:21:48.248183089 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.269 [INFO][3479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.269 [INFO][3479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.269 [INFO][3479] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.273 [INFO][3479] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.278 [INFO][3479] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.285 [INFO][3479] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.289 [INFO][3479] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.295 [INFO][3479] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.295 [INFO][3479] 
ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.298 [INFO][3479] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6 Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.303 [INFO][3479] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.313 [INFO][3479] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.313 [INFO][3479] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" host="10.0.0.4" Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.313 [INFO][3479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
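[Annotation] The ipam/ipam.go sequence above (acquire the host-wide lock, confirm the host's affinity for block 192.168.99.192/26, then claim the lowest free address) repeats for each pod on this node: .193 for csi-node-driver-2t7hw, .194 for nginx-deployment-8587fbcb89-cv5wj, .195 for nfs-server-provisioner-0. A toy sketch of that pattern using Python's `ipaddress` module — a simplification for illustration, not Calico's actual data structures:

```python
import ipaddress
from threading import Lock

class IpamBlock:
    """Toy model of block-based IPAM: one /26 block affine to this host,
    guarded by a single lock (stand-in for the host-wide IPAM lock)."""

    def __init__(self, cidr):
        self.block = ipaddress.ip_network(cidr)
        self.allocated = {}   # address -> handle
        self._lock = Lock()

    def auto_assign(self, handle):
        # Acquire the lock, walk the block, claim the lowest free host
        # address, and record the handle -- the pattern behind "Attempting
        # to assign 1 addresses from block" / "Successfully claimed IPs".
        with self._lock:
            for ip in self.block.hosts():   # skips .192 (network) and .255
                if ip not in self.allocated:
                    self.allocated[ip] = handle
                    return f"{ip}/{self.block.prefixlen}"
            raise RuntimeError("block exhausted")

block = IpamBlock("192.168.99.192/26")
for pod in ("csi-node-driver-2t7hw",
            "nginx-deployment-8587fbcb89-cv5wj",
            "nfs-server-provisioner-0"):
    print(pod, "->", block.auto_assign(pod))
```

Run in order, the three assignments reproduce the .193/.194/.195 sequence visible in the log.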
Feb 13 15:21:48.341304 containerd[1500]: 2025-02-13 15:21:48.313 [INFO][3479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" HandleID="k8s-pod-network.0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:21:48.341847 containerd[1500]: 2025-02-13 15:21:48.316 [INFO][3469] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"77362b31-345b-4344-8b44-a9882f0f93fc", ResourceVersion:"1654", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:48.341847 containerd[1500]: 2025-02-13 15:21:48.316 [INFO][3469] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:21:48.341847 containerd[1500]: 2025-02-13 15:21:48.317 [INFO][3469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:21:48.341847 containerd[1500]: 2025-02-13 15:21:48.321 [INFO][3469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:21:48.341986 containerd[1500]: 2025-02-13 15:21:48.323 [INFO][3469] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"77362b31-345b-4344-8b44-a9882f0f93fc", ResourceVersion:"1654", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"a6:84:de:b1:20:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:48.341986 containerd[1500]: 2025-02-13 15:21:48.337 [INFO][3469] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:21:48.374001 containerd[1500]: time="2025-02-13T15:21:48.372925429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:48.374001 containerd[1500]: time="2025-02-13T15:21:48.372985351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:48.374001 containerd[1500]: time="2025-02-13T15:21:48.373000952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:48.374001 containerd[1500]: time="2025-02-13T15:21:48.373151317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:48.397705 systemd[1]: Started cri-containerd-0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6.scope - libcontainer container 0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6. Feb 13 15:21:48.443102 containerd[1500]: time="2025-02-13T15:21:48.443061117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:77362b31-345b-4344-8b44-a9882f0f93fc,Namespace:default,Attempt:0,} returns sandbox id \"0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6\"" Feb 13 15:21:48.444934 containerd[1500]: time="2025-02-13T15:21:48.444897618Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 15:21:48.571703 kubelet[2020]: E0213 15:21:48.571635 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:49.369451 systemd-networkd[1398]: cali60e51b789ff: Gained IPv6LL Feb 13 15:21:49.572963 kubelet[2020]: E0213 15:21:49.572922 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:50.237206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251555921.mount: Deactivated successfully. 
Feb 13 15:21:50.573931 kubelet[2020]: E0213 15:21:50.573891 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:51.575714 kubelet[2020]: E0213 15:21:51.575635 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:51.832456 containerd[1500]: time="2025-02-13T15:21:51.832055869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:51.834534 containerd[1500]: time="2025-02-13T15:21:51.834466885Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Feb 13 15:21:51.835281 containerd[1500]: time="2025-02-13T15:21:51.834662413Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:51.837974 containerd[1500]: time="2025-02-13T15:21:51.837883142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:51.839358 containerd[1500]: time="2025-02-13T15:21:51.839315319Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.394372499s" Feb 13 15:21:51.839474 containerd[1500]: time="2025-02-13T15:21:51.839459324Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 15:21:51.842727 containerd[1500]: time="2025-02-13T15:21:51.842679533Z" level=info msg="CreateContainer within sandbox \"0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 15:21:51.864383 containerd[1500]: time="2025-02-13T15:21:51.864187591Z" level=info msg="CreateContainer within sandbox \"0194c544f8b66e02a3f96cfec8eca506c22df34a84571baf457cd226c8362ec6\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d564c7e2b9e39109f62e71920cab90e471f7b04b1b243293b4f495a9e2f2c542\"" Feb 13 15:21:51.866451 containerd[1500]: time="2025-02-13T15:21:51.865197351Z" level=info msg="StartContainer for \"d564c7e2b9e39109f62e71920cab90e471f7b04b1b243293b4f495a9e2f2c542\"" Feb 13 15:21:51.896052 systemd[1]: run-containerd-runc-k8s.io-d564c7e2b9e39109f62e71920cab90e471f7b04b1b243293b4f495a9e2f2c542-runc.BJ7Or4.mount: Deactivated successfully. Feb 13 15:21:51.903637 systemd[1]: Started cri-containerd-d564c7e2b9e39109f62e71920cab90e471f7b04b1b243293b4f495a9e2f2c542.scope - libcontainer container d564c7e2b9e39109f62e71920cab90e471f7b04b1b243293b4f495a9e2f2c542. 
Feb 13 15:21:51.934479 containerd[1500]: time="2025-02-13T15:21:51.934324870Z" level=info msg="StartContainer for \"d564c7e2b9e39109f62e71920cab90e471f7b04b1b243293b4f495a9e2f2c542\" returns successfully" Feb 13 15:21:52.576121 kubelet[2020]: E0213 15:21:52.576068 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:52.836083 kubelet[2020]: I0213 15:21:52.835441 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.439078841 podStartE2EDuration="5.835418455s" podCreationTimestamp="2025-02-13 15:21:47 +0000 UTC" firstStartedPulling="2025-02-13 15:21:48.444406162 +0000 UTC m=+34.514188357" lastFinishedPulling="2025-02-13 15:21:51.840745776 +0000 UTC m=+37.910527971" observedRunningTime="2025-02-13 15:21:52.828735494 +0000 UTC m=+38.898517689" watchObservedRunningTime="2025-02-13 15:21:52.835418455 +0000 UTC m=+38.905200650" Feb 13 15:21:53.576938 kubelet[2020]: E0213 15:21:53.576850 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:54.547020 kubelet[2020]: E0213 15:21:54.546925 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:54.577135 kubelet[2020]: E0213 15:21:54.577076 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:55.577449 kubelet[2020]: E0213 15:21:55.577389 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:56.578194 kubelet[2020]: E0213 15:21:56.578119 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:57.578779 kubelet[2020]: E0213 15:21:57.578726 2020 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:58.578916 kubelet[2020]: E0213 15:21:58.578859 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:21:59.579576 kubelet[2020]: E0213 15:21:59.579515 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:22:00.580699 kubelet[2020]: E0213 15:22:00.580629 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:22:01.474891 systemd[1]: Created slice kubepods-besteffort-pod3c789d6e_e9db_4f96_a539_57c490f61f9a.slice - libcontainer container kubepods-besteffort-pod3c789d6e_e9db_4f96_a539_57c490f61f9a.slice. Feb 13 15:22:01.581308 kubelet[2020]: E0213 15:22:01.581267 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:22:01.598596 kubelet[2020]: I0213 15:22:01.598518 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e527e066-1884-4a8b-a160-2fcd4004c96b\" (UniqueName: \"kubernetes.io/nfs/3c789d6e-e9db-4f96-a539-57c490f61f9a-pvc-e527e066-1884-4a8b-a160-2fcd4004c96b\") pod \"test-pod-1\" (UID: \"3c789d6e-e9db-4f96-a539-57c490f61f9a\") " pod="default/test-pod-1" Feb 13 15:22:01.598596 kubelet[2020]: I0213 15:22:01.598578 2020 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s229t\" (UniqueName: \"kubernetes.io/projected/3c789d6e-e9db-4f96-a539-57c490f61f9a-kube-api-access-s229t\") pod \"test-pod-1\" (UID: \"3c789d6e-e9db-4f96-a539-57c490f61f9a\") " pod="default/test-pod-1" Feb 13 15:22:01.728314 kernel: FS-Cache: Loaded Feb 13 15:22:01.753561 kernel: RPC: Registered named UNIX socket transport module. 
Feb 13 15:22:01.753774 kernel: RPC: Registered udp transport module. Feb 13 15:22:01.753869 kernel: RPC: Registered tcp transport module. Feb 13 15:22:01.753921 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:22:01.753968 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 15:22:01.920355 kernel: NFS: Registering the id_resolver key type Feb 13 15:22:01.920547 kernel: Key type id_resolver registered Feb 13 15:22:01.920640 kernel: Key type id_legacy registered Feb 13 15:22:01.954868 nfsidmap[3691]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:22:01.958696 nfsidmap[3692]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:22:02.079218 containerd[1500]: time="2025-02-13T15:22:02.079175648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3c789d6e-e9db-4f96-a539-57c490f61f9a,Namespace:default,Attempt:0,}" Feb 13 15:22:02.244012 systemd-networkd[1398]: cali5ec59c6bf6e: Link UP Feb 13 15:22:02.244364 systemd-networkd[1398]: cali5ec59c6bf6e: Gained carrier Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.145 [INFO][3693] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default 3c789d6e-e9db-4f96-a539-57c490f61f9a 1717 0 2025-02-13 15:21:49 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.145 [INFO][3693] cni-plugin/k8s.go 77: Extracted identifiers 
for CmdAddK8s ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.175 [INFO][3704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" HandleID="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Workload="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.195 [INFO][3704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" HandleID="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220b90), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-02-13 15:22:02.175204781 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.195 [INFO][3704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.195 [INFO][3704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.195 [INFO][3704] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.200 [INFO][3704] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.206 [INFO][3704] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.213 [INFO][3704] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.217 [INFO][3704] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.221 [INFO][3704] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.221 [INFO][3704] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.224 [INFO][3704] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.229 [INFO][3704] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.237 [INFO][3704] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 
handle="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.237 [INFO][3704] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" host="10.0.0.4" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.237 [INFO][3704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.237 [INFO][3704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" HandleID="k8s-pod-network.03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Workload="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 15:22:02.264619 containerd[1500]: 2025-02-13 15:22:02.239 [INFO][3693] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3c789d6e-e9db-4f96-a539-57c490f61f9a", ResourceVersion:"1717", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:22:02.266080 containerd[1500]: 2025-02-13 15:22:02.239 [INFO][3693] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 15:22:02.266080 containerd[1500]: 2025-02-13 15:22:02.239 [INFO][3693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 15:22:02.266080 containerd[1500]: 2025-02-13 15:22:02.244 [INFO][3693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 15:22:02.266080 containerd[1500]: 2025-02-13 15:22:02.245 [INFO][3693] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3c789d6e-e9db-4f96-a539-57c490f61f9a", ResourceVersion:"1717", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 
21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"2e:5f:0b:27:86:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:22:02.266080 containerd[1500]: 2025-02-13 15:22:02.262 [INFO][3693] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 15:22:02.295776 containerd[1500]: time="2025-02-13T15:22:02.295485400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:22:02.295776 containerd[1500]: time="2025-02-13T15:22:02.295627769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:22:02.296731 containerd[1500]: time="2025-02-13T15:22:02.296458698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:22:02.297023 containerd[1500]: time="2025-02-13T15:22:02.296890364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:22:02.318554 systemd[1]: Started cri-containerd-03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad.scope - libcontainer container 03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad. Feb 13 15:22:02.363700 containerd[1500]: time="2025-02-13T15:22:02.363647709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3c789d6e-e9db-4f96-a539-57c490f61f9a,Namespace:default,Attempt:0,} returns sandbox id \"03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad\"" Feb 13 15:22:02.366075 containerd[1500]: time="2025-02-13T15:22:02.365854601Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:22:02.582741 kubelet[2020]: E0213 15:22:02.582689 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:22:02.826649 containerd[1500]: time="2025-02-13T15:22:02.826558701Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:02.828102 containerd[1500]: time="2025-02-13T15:22:02.828031189Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:22:02.831633 containerd[1500]: time="2025-02-13T15:22:02.831574520Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 465.681438ms" Feb 13 15:22:02.831633 containerd[1500]: time="2025-02-13T15:22:02.831631844Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 15:22:02.835809 containerd[1500]: 
time="2025-02-13T15:22:02.835536117Z" level=info msg="CreateContainer within sandbox \"03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:22:02.857411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364100201.mount: Deactivated successfully.
Feb 13 15:22:02.864658 containerd[1500]: time="2025-02-13T15:22:02.864594651Z" level=info msg="CreateContainer within sandbox \"03edb5cd836b32f96ab6764fa5b511a3602b22c57d3f95eda80eef7540c82dad\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6720eadb8055dc6e1d733efe4c8aefd4c3b60bebe184235ac0a65fd56b9398fd\""
Feb 13 15:22:02.865493 containerd[1500]: time="2025-02-13T15:22:02.865399739Z" level=info msg="StartContainer for \"6720eadb8055dc6e1d733efe4c8aefd4c3b60bebe184235ac0a65fd56b9398fd\""
Feb 13 15:22:02.891490 systemd[1]: Started cri-containerd-6720eadb8055dc6e1d733efe4c8aefd4c3b60bebe184235ac0a65fd56b9398fd.scope - libcontainer container 6720eadb8055dc6e1d733efe4c8aefd4c3b60bebe184235ac0a65fd56b9398fd.
Feb 13 15:22:02.921944 containerd[1500]: time="2025-02-13T15:22:02.921828068Z" level=info msg="StartContainer for \"6720eadb8055dc6e1d733efe4c8aefd4c3b60bebe184235ac0a65fd56b9398fd\" returns successfully"
Feb 13 15:22:03.384565 systemd-networkd[1398]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 15:22:03.583581 kubelet[2020]: E0213 15:22:03.583491 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:04.584728 kubelet[2020]: E0213 15:22:04.584619 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:05.585700 kubelet[2020]: E0213 15:22:05.585611 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:06.586042 kubelet[2020]: E0213 15:22:06.585867 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:07.586830 kubelet[2020]: E0213 15:22:07.586757 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:08.587888 kubelet[2020]: E0213 15:22:08.587794 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:09.589249 kubelet[2020]: E0213 15:22:09.588992 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:10.590104 kubelet[2020]: E0213 15:22:10.590007 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:11.590964 kubelet[2020]: E0213 15:22:11.590875 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:12.591174 kubelet[2020]: E0213 15:22:12.591100 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:13.592124 kubelet[2020]: E0213 15:22:13.592003 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:14.547365 kubelet[2020]: E0213 15:22:14.547272 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:14.571908 containerd[1500]: time="2025-02-13T15:22:14.571783136Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\""
Feb 13 15:22:14.572771 containerd[1500]: time="2025-02-13T15:22:14.572045756Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully"
Feb 13 15:22:14.572771 containerd[1500]: time="2025-02-13T15:22:14.572170405Z" level=info msg="StopPodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully"
Feb 13 15:22:14.573151 containerd[1500]: time="2025-02-13T15:22:14.573083673Z" level=info msg="RemovePodSandbox for \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\""
Feb 13 15:22:14.573245 containerd[1500]: time="2025-02-13T15:22:14.573163159Z" level=info msg="Forcibly stopping sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\""
Feb 13 15:22:14.573406 containerd[1500]: time="2025-02-13T15:22:14.573340893Z" level=info msg="TearDown network for sandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" successfully"
Feb 13 15:22:14.578205 containerd[1500]: time="2025-02-13T15:22:14.578088847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.578627 containerd[1500]: time="2025-02-13T15:22:14.578254780Z" level=info msg="RemovePodSandbox \"a2d6c088c32d357391c4bb2a939e92924d1f461cbd92578d7cdd49c81412ac68\" returns successfully"
Feb 13 15:22:14.579056 containerd[1500]: time="2025-02-13T15:22:14.578833703Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\""
Feb 13 15:22:14.579056 containerd[1500]: time="2025-02-13T15:22:14.578953472Z" level=info msg="TearDown network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully"
Feb 13 15:22:14.579056 containerd[1500]: time="2025-02-13T15:22:14.578965873Z" level=info msg="StopPodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully"
Feb 13 15:22:14.580970 containerd[1500]: time="2025-02-13T15:22:14.579712008Z" level=info msg="RemovePodSandbox for \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\""
Feb 13 15:22:14.580970 containerd[1500]: time="2025-02-13T15:22:14.579742771Z" level=info msg="Forcibly stopping sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\""
Feb 13 15:22:14.580970 containerd[1500]: time="2025-02-13T15:22:14.579819056Z" level=info msg="TearDown network for sandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" successfully"
Feb 13 15:22:14.583399 containerd[1500]: time="2025-02-13T15:22:14.583180987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.583399 containerd[1500]: time="2025-02-13T15:22:14.583256193Z" level=info msg="RemovePodSandbox \"c28073673a3a448aeffacd3ae0e230d53428f488b254f04cf00cf1377893e7aa\" returns successfully"
Feb 13 15:22:14.584015 containerd[1500]: time="2025-02-13T15:22:14.583978607Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\""
Feb 13 15:22:14.584150 containerd[1500]: time="2025-02-13T15:22:14.584092216Z" level=info msg="TearDown network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" successfully"
Feb 13 15:22:14.584183 containerd[1500]: time="2025-02-13T15:22:14.584148180Z" level=info msg="StopPodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" returns successfully"
Feb 13 15:22:14.584576 containerd[1500]: time="2025-02-13T15:22:14.584552650Z" level=info msg="RemovePodSandbox for \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\""
Feb 13 15:22:14.584647 containerd[1500]: time="2025-02-13T15:22:14.584579612Z" level=info msg="Forcibly stopping sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\""
Feb 13 15:22:14.584676 containerd[1500]: time="2025-02-13T15:22:14.584655378Z" level=info msg="TearDown network for sandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" successfully"
Feb 13 15:22:14.587713 containerd[1500]: time="2025-02-13T15:22:14.587652281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.587713 containerd[1500]: time="2025-02-13T15:22:14.587722567Z" level=info msg="RemovePodSandbox \"3ccc57f47e1284ea3d78b994f27698fa58de9f3d26fc36917a4accc7ce6b7171\" returns successfully"
Feb 13 15:22:14.588765 containerd[1500]: time="2025-02-13T15:22:14.588587671Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\""
Feb 13 15:22:14.588765 containerd[1500]: time="2025-02-13T15:22:14.588697239Z" level=info msg="TearDown network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" successfully"
Feb 13 15:22:14.588765 containerd[1500]: time="2025-02-13T15:22:14.588706880Z" level=info msg="StopPodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" returns successfully"
Feb 13 15:22:14.589531 containerd[1500]: time="2025-02-13T15:22:14.589299684Z" level=info msg="RemovePodSandbox for \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\""
Feb 13 15:22:14.589531 containerd[1500]: time="2025-02-13T15:22:14.589329887Z" level=info msg="Forcibly stopping sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\""
Feb 13 15:22:14.589531 containerd[1500]: time="2025-02-13T15:22:14.589401092Z" level=info msg="TearDown network for sandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" successfully"
Feb 13 15:22:14.592919 kubelet[2020]: E0213 15:22:14.592873 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:14.593443 containerd[1500]: time="2025-02-13T15:22:14.592999481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.593443 containerd[1500]: time="2025-02-13T15:22:14.593069766Z" level=info msg="RemovePodSandbox \"0db4cfe64b1f5e848d97efc857eaeff6a8a6482b2739f57d9f467f0df91a21a1\" returns successfully"
Feb 13 15:22:14.594048 containerd[1500]: time="2025-02-13T15:22:14.594021157Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\""
Feb 13 15:22:14.594178 containerd[1500]: time="2025-02-13T15:22:14.594159767Z" level=info msg="TearDown network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" successfully"
Feb 13 15:22:14.594178 containerd[1500]: time="2025-02-13T15:22:14.594176289Z" level=info msg="StopPodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" returns successfully"
Feb 13 15:22:14.594550 containerd[1500]: time="2025-02-13T15:22:14.594522875Z" level=info msg="RemovePodSandbox for \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\""
Feb 13 15:22:14.594588 containerd[1500]: time="2025-02-13T15:22:14.594554357Z" level=info msg="Forcibly stopping sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\""
Feb 13 15:22:14.594634 containerd[1500]: time="2025-02-13T15:22:14.594619402Z" level=info msg="TearDown network for sandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" successfully"
Feb 13 15:22:14.597532 containerd[1500]: time="2025-02-13T15:22:14.597486016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.597697 containerd[1500]: time="2025-02-13T15:22:14.597545940Z" level=info msg="RemovePodSandbox \"76b033cd3fd8cdbf46e8d618e20d3d24daf8d454c6ab23cea056313165aa1e86\" returns successfully"
Feb 13 15:22:14.597959 containerd[1500]: time="2025-02-13T15:22:14.597930169Z" level=info msg="StopPodSandbox for \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\""
Feb 13 15:22:14.598415 containerd[1500]: time="2025-02-13T15:22:14.598277675Z" level=info msg="TearDown network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" successfully"
Feb 13 15:22:14.598415 containerd[1500]: time="2025-02-13T15:22:14.598305477Z" level=info msg="StopPodSandbox for \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" returns successfully"
Feb 13 15:22:14.599803 containerd[1500]: time="2025-02-13T15:22:14.598578017Z" level=info msg="RemovePodSandbox for \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\""
Feb 13 15:22:14.599803 containerd[1500]: time="2025-02-13T15:22:14.598602739Z" level=info msg="Forcibly stopping sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\""
Feb 13 15:22:14.599803 containerd[1500]: time="2025-02-13T15:22:14.598657903Z" level=info msg="TearDown network for sandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" successfully"
Feb 13 15:22:14.601649 containerd[1500]: time="2025-02-13T15:22:14.601597683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.601887 containerd[1500]: time="2025-02-13T15:22:14.601864183Z" level=info msg="RemovePodSandbox \"8beed6528f649246199db3322f18386401663ca26bd347949b740359f86fa2bc\" returns successfully"
Feb 13 15:22:14.602778 containerd[1500]: time="2025-02-13T15:22:14.602734768Z" level=info msg="StopPodSandbox for \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\""
Feb 13 15:22:14.602919 containerd[1500]: time="2025-02-13T15:22:14.602892420Z" level=info msg="TearDown network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\" successfully"
Feb 13 15:22:14.602994 containerd[1500]: time="2025-02-13T15:22:14.602917582Z" level=info msg="StopPodSandbox for \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\" returns successfully"
Feb 13 15:22:14.604532 containerd[1500]: time="2025-02-13T15:22:14.603539668Z" level=info msg="RemovePodSandbox for \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\""
Feb 13 15:22:14.604532 containerd[1500]: time="2025-02-13T15:22:14.603577031Z" level=info msg="Forcibly stopping sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\""
Feb 13 15:22:14.604532 containerd[1500]: time="2025-02-13T15:22:14.603673838Z" level=info msg="TearDown network for sandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\" successfully"
Feb 13 15:22:14.607080 containerd[1500]: time="2025-02-13T15:22:14.607022288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.607397 containerd[1500]: time="2025-02-13T15:22:14.607086053Z" level=info msg="RemovePodSandbox \"51cdbf271b54b2c9779769e815c9f63e02c6f1dcb6e25a608b82983e902f3dc1\" returns successfully"
Feb 13 15:22:14.608326 containerd[1500]: time="2025-02-13T15:22:14.607902594Z" level=info msg="StopPodSandbox for \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\""
Feb 13 15:22:14.608326 containerd[1500]: time="2025-02-13T15:22:14.608024763Z" level=info msg="TearDown network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" successfully"
Feb 13 15:22:14.608326 containerd[1500]: time="2025-02-13T15:22:14.608036044Z" level=info msg="StopPodSandbox for \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" returns successfully"
Feb 13 15:22:14.609154 containerd[1500]: time="2025-02-13T15:22:14.609079682Z" level=info msg="RemovePodSandbox for \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\""
Feb 13 15:22:14.609154 containerd[1500]: time="2025-02-13T15:22:14.609128726Z" level=info msg="Forcibly stopping sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\""
Feb 13 15:22:14.609507 containerd[1500]: time="2025-02-13T15:22:14.609341141Z" level=info msg="TearDown network for sandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" successfully"
Feb 13 15:22:14.615644 containerd[1500]: time="2025-02-13T15:22:14.615482000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.615644 containerd[1500]: time="2025-02-13T15:22:14.615547765Z" level=info msg="RemovePodSandbox \"3e7317898ef0edf77e77713a7d4b4080bcd1122c9539e86dac47f5a654ca8771\" returns successfully"
Feb 13 15:22:14.616527 containerd[1500]: time="2025-02-13T15:22:14.616194933Z" level=info msg="StopPodSandbox for \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\""
Feb 13 15:22:14.616527 containerd[1500]: time="2025-02-13T15:22:14.616321103Z" level=info msg="TearDown network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\" successfully"
Feb 13 15:22:14.616527 containerd[1500]: time="2025-02-13T15:22:14.616331624Z" level=info msg="StopPodSandbox for \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\" returns successfully"
Feb 13 15:22:14.616971 containerd[1500]: time="2025-02-13T15:22:14.616901666Z" level=info msg="RemovePodSandbox for \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\""
Feb 13 15:22:14.616971 containerd[1500]: time="2025-02-13T15:22:14.616939509Z" level=info msg="Forcibly stopping sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\""
Feb 13 15:22:14.617210 containerd[1500]: time="2025-02-13T15:22:14.617117282Z" level=info msg="TearDown network for sandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\" successfully"
Feb 13 15:22:14.620561 containerd[1500]: time="2025-02-13T15:22:14.620420609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:14.620561 containerd[1500]: time="2025-02-13T15:22:14.620507575Z" level=info msg="RemovePodSandbox \"c1f555a770cbfaa0c2f6e894327efeca8c475717e984b511475a7bf88da453fa\" returns successfully"
Feb 13 15:22:15.593901 kubelet[2020]: E0213 15:22:15.593800 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:16.595082 kubelet[2020]: E0213 15:22:16.595002 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:17.596175 kubelet[2020]: E0213 15:22:17.596104 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:18.597424 kubelet[2020]: E0213 15:22:18.597314 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:19.597739 kubelet[2020]: E0213 15:22:19.597631 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:20.598434 kubelet[2020]: E0213 15:22:20.598343 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:21.362186 kubelet[2020]: E0213 15:22:21.362136 2020 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43456->10.0.0.2:2379: read: connection timed out"
Feb 13 15:22:21.598587 kubelet[2020]: E0213 15:22:21.598530 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:22.599354 kubelet[2020]: E0213 15:22:22.599135 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:23.600073 kubelet[2020]: E0213 15:22:23.600009 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:24.601035 kubelet[2020]: E0213 15:22:24.600905 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:25.601514 kubelet[2020]: E0213 15:22:25.601440 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:25.669489 kubelet[2020]: E0213 15:22:25.669324 2020 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:22:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:22:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:22:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T15:22:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.1\\\"],\\\"sizeBytes\\\":137671624},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.1\\\"],\\\"sizeBytes\\\":91072777},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":69692964},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\\\",\\\"registry.k8s.io/kube-proxy:v1.31.6\\\"],\\\"sizeBytes\\\":26768275},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\\\"],\\\"sizeBytes\\\":11252974},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.1\\\"],\\\"sizeBytes\\\":8834384},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\\\"],\\\"sizeBytes\\\":6487425},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"10.0.0.4\": Patch \"https://188.245.200.94:6443/api/v1/nodes/10.0.0.4/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:22:25.906020 kubelet[2020]: E0213 15:22:25.905395 2020 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43350->10.0.0.2:2379: read: connection timed out"
Feb 13 15:22:26.602155 kubelet[2020]: E0213 15:22:26.602088 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:27.602404 kubelet[2020]: E0213 15:22:27.602293 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:28.602610 kubelet[2020]: E0213 15:22:28.602537 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:29.603521 kubelet[2020]: E0213 15:22:29.603451 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:30.604497 kubelet[2020]: E0213 15:22:30.604444 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:31.363186 kubelet[2020]: E0213 15:22:31.363064 2020 controller.go:195] "Failed to update lease" err="Put \"https://188.245.200.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:22:31.605162 kubelet[2020]: E0213 15:22:31.605093 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:32.605294 kubelet[2020]: E0213 15:22:32.605222 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:33.605579 kubelet[2020]: E0213 15:22:33.605490 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:34.547048 kubelet[2020]: E0213 15:22:34.546977 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:34.606362 kubelet[2020]: E0213 15:22:34.606281 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:35.607245 kubelet[2020]: E0213 15:22:35.606895 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:22:35.905979 kubelet[2020]: E0213 15:22:35.905783 2020 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": Get \"https://188.245.200.94:6443/api/v1/nodes/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:22:36.607454 kubelet[2020]: E0213 15:22:36.607386 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"