Mar 17 17:51:04.902962 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:51:04.902990 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025
Mar 17 17:51:04.903002 kernel: KASLR enabled
Mar 17 17:51:04.903008 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 17 17:51:04.903013 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Mar 17 17:51:04.903019 kernel: random: crng init done
Mar 17 17:51:04.903026 kernel: secureboot: Secure boot disabled
Mar 17 17:51:04.903032 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:51:04.903038 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 17 17:51:04.903046 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:51:04.903052 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903058 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903064 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903070 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903078 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903085 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903092 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903098 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903104 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:51:04.903111 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:51:04.903117 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 17 17:51:04.903123 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:51:04.903129 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:51:04.903136 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Mar 17 17:51:04.903142 kernel: Zone ranges:
Mar 17 17:51:04.903150 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:51:04.903156 kernel: DMA32 empty
Mar 17 17:51:04.903162 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 17 17:51:04.903168 kernel: Movable zone start for each node
Mar 17 17:51:04.903174 kernel: Early memory node ranges
Mar 17 17:51:04.903181 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Mar 17 17:51:04.903187 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Mar 17 17:51:04.903193 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Mar 17 17:51:04.903200 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 17 17:51:04.903206 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 17 17:51:04.903212 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 17 17:51:04.903218 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 17 17:51:04.903225 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 17 17:51:04.903231 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 17 17:51:04.903237 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:51:04.903247 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 17 17:51:04.903253 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:51:04.903260 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:51:04.903268 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:51:04.903275 kernel: psci: Trusted OS migration not required
Mar 17 17:51:04.903282 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:51:04.903288 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:51:04.903294 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:51:04.903301 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:51:04.903308 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:51:04.903314 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:51:04.903321 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:51:04.903328 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:51:04.903336 kernel: CPU features: detected: Spectre-v4
Mar 17 17:51:04.903342 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:51:04.903349 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:51:04.903355 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:51:04.903362 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:51:04.903368 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:51:04.903375 kernel: alternatives: applying boot alternatives
Mar 17 17:51:04.903395 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:51:04.903404 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:51:04.903411 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:51:04.903418 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:51:04.903427 kernel: Fallback order for Node 0: 0
Mar 17 17:51:04.903433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 17 17:51:04.903440 kernel: Policy zone: Normal
Mar 17 17:51:04.903447 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:51:04.903453 kernel: software IO TLB: area num 2.
Mar 17 17:51:04.903460 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 17 17:51:04.903467 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved)
Mar 17 17:51:04.903473 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:51:04.903480 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:51:04.903487 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:51:04.903494 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:51:04.903501 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:51:04.903509 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:51:04.903516 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:51:04.903523 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:51:04.903529 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:51:04.903536 kernel: GICv3: 256 SPIs implemented
Mar 17 17:51:04.903542 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:51:04.903549 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:51:04.903555 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:51:04.903562 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:51:04.903568 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:51:04.903575 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:51:04.903583 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:51:04.903590 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 17 17:51:04.903597 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 17 17:51:04.903604 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:51:04.903610 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:51:04.903617 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:51:04.903624 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:51:04.903631 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:51:04.903637 kernel: Console: colour dummy device 80x25
Mar 17 17:51:04.903644 kernel: ACPI: Core revision 20230628
Mar 17 17:51:04.903651 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:51:04.903659 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:51:04.903666 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:51:04.903673 kernel: landlock: Up and running.
Mar 17 17:51:04.903680 kernel: SELinux: Initializing.
Mar 17 17:51:04.903687 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:51:04.903694 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:51:04.903701 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:51:04.903708 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:51:04.903715 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:51:04.903723 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:51:04.903733 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:51:04.903741 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:51:04.903747 kernel: Remapping and enabling EFI services.
Mar 17 17:51:04.903754 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:51:04.903760 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:51:04.903767 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:51:04.903775 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 17 17:51:04.903781 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:51:04.903790 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:51:04.903797 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:51:04.903809 kernel: SMP: Total of 2 processors activated.
Mar 17 17:51:04.903818 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:51:04.903826 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:51:04.903833 kernel: CPU features: detected: Common not Private translations
Mar 17 17:51:04.903841 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:51:04.903848 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:51:04.903855 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:51:04.903864 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:51:04.903871 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:51:04.903878 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:51:04.903885 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:51:04.903892 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:51:04.903909 kernel: alternatives: applying system-wide alternatives
Mar 17 17:51:04.903917 kernel: devtmpfs: initialized
Mar 17 17:51:04.903924 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:51:04.903934 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:51:04.903941 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:51:04.903949 kernel: SMBIOS 3.0.0 present.
Mar 17 17:51:04.903956 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 17 17:51:04.903963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:51:04.903970 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:51:04.903977 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:51:04.903985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:51:04.903992 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:51:04.904000 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Mar 17 17:51:04.904007 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:51:04.904014 kernel: cpuidle: using governor menu
Mar 17 17:51:04.904022 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:51:04.904029 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:51:04.904036 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:51:04.904043 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:51:04.904051 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:51:04.904058 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:51:04.904067 kernel: Modules: 509280 pages in range for PLT usage
Mar 17 17:51:04.904074 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:51:04.904081 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:51:04.904088 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:51:04.904096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:51:04.904103 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:51:04.904110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:51:04.904117 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:51:04.904124 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:51:04.904133 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:51:04.904140 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:51:04.904147 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:51:04.904154 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:51:04.904161 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:51:04.904169 kernel: ACPI: Interpreter enabled
Mar 17 17:51:04.904176 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:51:04.904183 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:51:04.904190 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:51:04.904199 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:51:04.904206 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:51:04.904361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:51:04.904457 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:51:04.904528 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:51:04.904595 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:51:04.904663 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:51:04.904676 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:51:04.904683 kernel: PCI host bridge to bus 0000:00
Mar 17 17:51:04.904763 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:51:04.904827 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:51:04.904889 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:51:04.904994 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:51:04.905088 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:51:04.905176 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 17 17:51:04.905250 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 17 17:51:04.905324 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:51:04.905452 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.905536 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 17 17:51:04.905617 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.905694 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 17 17:51:04.905772 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.905843 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 17 17:51:04.905939 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.906017 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 17 17:51:04.906102 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.906176 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 17 17:51:04.906256 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.906327 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 17 17:51:04.906424 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.906498 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 17 17:51:04.906576 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.906652 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 17 17:51:04.906731 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:51:04.906802 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 17 17:51:04.906889 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 17 17:51:04.906991 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 17 17:51:04.907074 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:51:04.907151 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 17 17:51:04.907230 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:51:04.907302 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:51:04.907382 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 17:51:04.907472 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 17 17:51:04.907557 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 17:51:04.907629 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 17 17:51:04.907706 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 17 17:51:04.907787 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 17:51:04.907860 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 17 17:51:04.909024 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 17:51:04.909125 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 17 17:51:04.909199 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 17 17:51:04.909285 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 17:51:04.909366 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 17 17:51:04.909458 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:51:04.909545 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:51:04.909617 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 17 17:51:04.909691 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 17 17:51:04.909776 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:51:04.909861 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 17 17:51:04.910049 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:51:04.910127 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:51:04.910201 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 17 17:51:04.910269 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 17 17:51:04.910337 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 17 17:51:04.910425 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 17 17:51:04.910507 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:51:04.910576 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:51:04.910649 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 17 17:51:04.910719 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 17 17:51:04.910787 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 17 17:51:04.910859 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 17 17:51:04.910983 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:51:04.911054 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:51:04.911130 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 17:51:04.911199 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:51:04.911266 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:51:04.911340 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 17:51:04.911456 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:51:04.911530 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:51:04.911603 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 17:51:04.911677 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:51:04.911747 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:51:04.911822 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 17:51:04.911892 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:51:04.911982 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:51:04.912053 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 17 17:51:04.912123 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:51:04.912197 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 17 17:51:04.912269 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:51:04.912340 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 17 17:51:04.912428 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:51:04.912505 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 17 17:51:04.912581 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:51:04.912653 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 17 17:51:04.912726 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:51:04.912797 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 17 17:51:04.912865 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:51:04.912969 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 17 17:51:04.913041 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:51:04.913113 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 17 17:51:04.913183 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:51:04.913258 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 17 17:51:04.913329 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:51:04.913414 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 17 17:51:04.913486 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 17 17:51:04.913558 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 17 17:51:04.913627 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 17:51:04.913697 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 17 17:51:04.913771 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 17:51:04.913846 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 17 17:51:04.914380 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 17:51:04.914533 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 17 17:51:04.914607 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 17 17:51:04.914680 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 17 17:51:04.914749 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 17 17:51:04.914820 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 17 17:51:04.915178 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 17 17:51:04.915666 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 17 17:51:04.915764 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 17 17:51:04.915842 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 17 17:51:04.917045 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 17 17:51:04.917153 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 17 17:51:04.917225 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 17 17:51:04.917299 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 17 17:51:04.917421 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 17 17:51:04.917502 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:51:04.917576 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 17 17:51:04.917647 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:51:04.919088 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 17 17:51:04.919182 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 17 17:51:04.919251 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:51:04.919331 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 17 17:51:04.919460 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:51:04.919536 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 17 17:51:04.919606 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 17 17:51:04.919675 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:51:04.919760 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:51:04.919833 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 17 17:51:04.919947 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:51:04.921342 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 17 17:51:04.921459 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 17 17:51:04.921548 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:51:04.921636 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:51:04.921712 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:51:04.921794 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 17 17:51:04.921868 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 17 17:51:04.921993 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:51:04.922077 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 17 17:51:04.922153 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 17 17:51:04.922228 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:51:04.922307 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 17 17:51:04.922416 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 17 17:51:04.922514 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:51:04.922599 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 17 17:51:04.922675 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 17 17:51:04.922750 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:51:04.922822 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 17 17:51:04.922911 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 17 17:51:04.923007 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:51:04.923093 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 17 17:51:04.923176 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 17 17:51:04.923252 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 17 17:51:04.923327 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:51:04.923416 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 17 17:51:04.923493 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 17 17:51:04.923567 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:51:04.923642 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:51:04.923715 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 17 17:51:04.923793 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 17 17:51:04.923875 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:51:04.924099 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:51:04.924178 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 17 17:51:04.924248 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 17 17:51:04.924316 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:51:04.924423 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:51:04.924501 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:51:04.924570 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:51:04.924646 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 17 17:51:04.924714 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 17 17:51:04.924778 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:51:04.924850 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 17 17:51:04.924943 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 17 17:51:04.925018 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:51:04.925104 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 17 17:51:04.925172 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 17 17:51:04.925237 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:51:04.925321 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 17 17:51:04.925406 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 17 17:51:04.925479 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:51:04.925563 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 17 17:51:04.925629 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 17 17:51:04.925695 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:51:04.925772 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 17 17:51:04.925841 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 17 17:51:04.927989 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:51:04.928101 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 17 17:51:04.928171 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 17 17:51:04.928236 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:51:04.928313 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 17 17:51:04.928379 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 17 17:51:04.928477 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:51:04.928554 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 17 17:51:04.928621 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 17 17:51:04.928688 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:51:04.928698 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:51:04.928705 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:51:04.928713 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:51:04.928724 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:51:04.928731 kernel: iommu: Default domain type: Translated
Mar 17 17:51:04.928739 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:51:04.928746 kernel: efivars: Registered efivars operations
Mar 17 17:51:04.928754 kernel: vgaarb: loaded
Mar 17 17:51:04.928761 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:51:04.928768 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:51:04.928776 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:51:04.928783 kernel: pnp: PnP ACPI init
Mar 17 17:51:04.928864 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:51:04.928875 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:51:04.928883 kernel: NET: Registered PF_INET protocol family
Mar 17 17:51:04.928891 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:51:04.930040 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:51:04.930053 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:51:04.930061 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:51:04.930069 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:51:04.930077 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:51:04.930091 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:51:04.930098 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:51:04.930106 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:51:04.930223 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 17 17:51:04.930237 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:51:04.930246 kernel: kvm [1]: HYP mode not available
Mar 17 17:51:04.930253 kernel: Initialise system trusted keyrings
Mar 17 17:51:04.930261 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:51:04.930269 kernel: Key type asymmetric registered
Mar 17 17:51:04.930280 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:51:04.930288 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:51:04.930295 kernel: io scheduler mq-deadline registered
Mar 17 17:51:04.930303 kernel: io scheduler kyber registered
Mar 17 17:51:04.930310 kernel: io scheduler bfq registered
Mar 17 17:51:04.930319 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:51:04.930441 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 17 17:51:04.930522 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 17 17:51:04.930600 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 17 17:51:04.930671 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 17 17:51:04.930742 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 17 17:51:04.930811 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Mar 17 17:51:04.930885 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 17 17:51:04.934090 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 17 17:51:04.934192 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:51:04.934268 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 17 17:51:04.934341 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 17 17:51:04.934431 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:51:04.934509 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 17 17:51:04.934579 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 17 17:51:04.934653 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:51:04.934731 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 17 17:51:04.934802 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 17 17:51:04.934872 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:51:04.934961 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 17 17:51:04.935033 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 17 17:51:04.935105 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:51:04.935180 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 17 17:51:04.935250 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 17 17:51:04.935321 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 
17:51:04.935332 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 17 17:51:04.935421 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 17 17:51:04.935499 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 17 17:51:04.935569 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:51:04.935580 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:51:04.935588 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:51:04.935596 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:51:04.935672 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 17 17:51:04.935752 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 17 17:51:04.935763 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:51:04.935774 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 17:51:04.935849 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 17 17:51:04.935860 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 17 17:51:04.935868 kernel: thunder_xcv, ver 1.0 Mar 17 17:51:04.935876 kernel: thunder_bgx, ver 1.0 Mar 17 17:51:04.935883 kernel: nicpf, ver 1.0 Mar 17 17:51:04.935891 kernel: nicvf, ver 1.0 Mar 17 17:51:04.938793 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:51:04.938881 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:51:04 UTC (1742233864) Mar 17 17:51:04.938892 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:51:04.938915 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:51:04.938924 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:51:04.938932 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:51:04.938940 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:51:04.938947 kernel: Segment 
Routing with IPv6 Mar 17 17:51:04.938955 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:51:04.938963 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:51:04.938974 kernel: Key type dns_resolver registered Mar 17 17:51:04.938983 kernel: registered taskstats version 1 Mar 17 17:51:04.938991 kernel: Loading compiled-in X.509 certificates Mar 17 17:51:04.938999 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51' Mar 17 17:51:04.939006 kernel: Key type .fscrypt registered Mar 17 17:51:04.939014 kernel: Key type fscrypt-provisioning registered Mar 17 17:51:04.939021 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:51:04.939029 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:51:04.939037 kernel: ima: No architecture policies found Mar 17 17:51:04.939047 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:51:04.939057 kernel: clk: Disabling unused clocks Mar 17 17:51:04.939065 kernel: Freeing unused kernel memory: 38336K Mar 17 17:51:04.939072 kernel: Run /init as init process Mar 17 17:51:04.939080 kernel: with arguments: Mar 17 17:51:04.939088 kernel: /init Mar 17 17:51:04.939095 kernel: with environment: Mar 17 17:51:04.939103 kernel: HOME=/ Mar 17 17:51:04.939111 kernel: TERM=linux Mar 17 17:51:04.939120 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:51:04.939129 systemd[1]: Successfully made /usr/ read-only. Mar 17 17:51:04.939141 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:51:04.939150 systemd[1]: Detected virtualization kvm. Mar 17 17:51:04.939158 systemd[1]: Detected architecture arm64. 
Mar 17 17:51:04.939165 systemd[1]: Running in initrd. Mar 17 17:51:04.939173 systemd[1]: No hostname configured, using default hostname. Mar 17 17:51:04.939183 systemd[1]: Hostname set to . Mar 17 17:51:04.939191 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:51:04.939199 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:51:04.939208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:51:04.939216 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:51:04.939225 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:51:04.939233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:51:04.939241 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:51:04.939252 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:51:04.939261 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:51:04.939269 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:51:04.939277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:51:04.939285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:51:04.939293 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:51:04.939301 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:51:04.939311 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:51:04.939319 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:51:04.939327 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 17 17:51:04.939335 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:51:04.939343 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:51:04.939351 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:51:04.939360 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:51:04.939368 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:51:04.939376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:51:04.939400 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:51:04.939409 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:51:04.939417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:51:04.939425 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:51:04.939433 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:51:04.939441 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:51:04.939449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:51:04.939457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:51:04.939498 systemd-journald[236]: Collecting audit messages is disabled. Mar 17 17:51:04.939519 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:51:04.939527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:51:04.939538 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:51:04.939547 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:51:04.939555 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 17 17:51:04.939564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:04.939573 systemd-journald[236]: Journal started Mar 17 17:51:04.939594 systemd-journald[236]: Runtime Journal (/run/log/journal/3f9f275d8cb94c84823b4c08c7d1fc66) is 8M, max 76.6M, 68.6M free. Mar 17 17:51:04.916481 systemd-modules-load[237]: Inserted module 'overlay' Mar 17 17:51:04.941374 kernel: Bridge firewalling registered Mar 17 17:51:04.940978 systemd-modules-load[237]: Inserted module 'br_netfilter' Mar 17 17:51:04.943941 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:51:04.944319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:51:04.945207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:51:04.955248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:51:04.959262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:51:04.964170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:51:04.968765 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:51:04.987105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:51:04.991691 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:51:04.994573 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:51:04.995725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:51:05.007770 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:51:05.014143 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 17 17:51:05.030085 dracut-cmdline[273]: dracut-dracut-053 Mar 17 17:51:05.035028 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:51:05.064368 systemd-resolved[275]: Positive Trust Anchors: Mar 17 17:51:05.064394 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:51:05.064427 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:51:05.072047 systemd-resolved[275]: Defaulting to hostname 'linux'. Mar 17 17:51:05.073331 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:51:05.074091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:51:05.143938 kernel: SCSI subsystem initialized Mar 17 17:51:05.150954 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:51:05.159000 kernel: iscsi: registered transport (tcp) Mar 17 17:51:05.176180 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:51:05.176335 kernel: QLogic iSCSI HBA Driver Mar 17 17:51:05.233534 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 17 17:51:05.250208 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:51:05.273066 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:51:05.273197 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:51:05.273227 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:51:05.344912 kernel: raid6: neonx8 gen() 15714 MB/s Mar 17 17:51:05.345991 kernel: raid6: neonx4 gen() 15714 MB/s Mar 17 17:51:05.362980 kernel: raid6: neonx2 gen() 13141 MB/s Mar 17 17:51:05.379942 kernel: raid6: neonx1 gen() 10359 MB/s Mar 17 17:51:05.396988 kernel: raid6: int64x8 gen() 6767 MB/s Mar 17 17:51:05.413956 kernel: raid6: int64x4 gen() 7262 MB/s Mar 17 17:51:05.430958 kernel: raid6: int64x2 gen() 6086 MB/s Mar 17 17:51:05.447942 kernel: raid6: int64x1 gen() 5033 MB/s Mar 17 17:51:05.448009 kernel: raid6: using algorithm neonx8 gen() 15714 MB/s Mar 17 17:51:05.464983 kernel: raid6: .... xor() 11821 MB/s, rmw enabled Mar 17 17:51:05.465059 kernel: raid6: using neon recovery algorithm Mar 17 17:51:05.469939 kernel: xor: measuring software checksum speed Mar 17 17:51:05.471161 kernel: 8regs : 19276 MB/sec Mar 17 17:51:05.471193 kernel: 32regs : 20892 MB/sec Mar 17 17:51:05.471209 kernel: arm64_neon : 27889 MB/sec Mar 17 17:51:05.471224 kernel: xor: using function: arm64_neon (27889 MB/sec) Mar 17 17:51:05.522943 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:51:05.537892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:51:05.546131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:51:05.562035 systemd-udevd[457]: Using default interface naming scheme 'v255'. Mar 17 17:51:05.566188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 17 17:51:05.577104 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:51:05.592713 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Mar 17 17:51:05.630875 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:51:05.637114 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:51:05.690706 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:51:05.700157 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:51:05.726869 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:51:05.729722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:51:05.732230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:51:05.733826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:51:05.741194 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:51:05.758706 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 17 17:51:05.801058 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:51:05.820091 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:51:05.820187 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 17 17:51:05.834655 kernel: ACPI: bus type USB registered Mar 17 17:51:05.834856 kernel: usbcore: registered new interface driver usbfs Mar 17 17:51:05.835021 kernel: usbcore: registered new interface driver hub Mar 17 17:51:05.835074 kernel: usbcore: registered new device driver usb Mar 17 17:51:05.857234 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 17 17:51:05.859687 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 17 17:51:05.859819 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:51:05.859830 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:51:05.861185 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:51:05.861330 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:51:05.863526 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:51:05.867823 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:51:05.884453 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 17 17:51:05.884594 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 17:51:05.884697 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:51:05.884787 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 17 17:51:05.884874 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 17 17:51:05.884988 kernel: hub 1-0:1.0: USB hub found Mar 17 17:51:05.885116 kernel: hub 1-0:1.0: 4 ports detected Mar 17 17:51:05.885202 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Mar 17 17:51:05.885301 kernel: hub 2-0:1.0: USB hub found Mar 17 17:51:05.885413 kernel: hub 2-0:1.0: 4 ports detected Mar 17 17:51:05.864123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:51:05.865003 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:05.867936 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:51:05.877174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:51:05.898652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:05.901251 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 17 17:51:05.913318 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 17 17:51:05.913483 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 17 17:51:05.913590 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 17 17:51:05.913696 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 17:51:05.913779 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:51:05.913793 kernel: GPT:17805311 != 80003071 Mar 17 17:51:05.913803 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:51:05.913816 kernel: GPT:17805311 != 80003071 Mar 17 17:51:05.913824 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:51:05.913838 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:51:05.913852 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 17 17:51:05.910370 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:51:05.935985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 17 17:51:05.968607 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (523) Mar 17 17:51:05.983307 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 17 17:51:05.986086 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (506) Mar 17 17:51:05.999540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 17:51:06.017338 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 17 17:51:06.025267 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 17 17:51:06.027038 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 17 17:51:06.034136 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:51:06.042612 disk-uuid[574]: Primary Header is updated. Mar 17 17:51:06.042612 disk-uuid[574]: Secondary Entries is updated. Mar 17 17:51:06.042612 disk-uuid[574]: Secondary Header is updated. 
Mar 17 17:51:06.046927 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:51:06.127933 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 17:51:06.367965 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 17 17:51:06.504208 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 17 17:51:06.504283 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 17 17:51:06.505734 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 17 17:51:06.560789 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 17 17:51:06.561122 kernel: usbcore: registered new interface driver usbhid Mar 17 17:51:06.561152 kernel: usbhid: USB HID core driver Mar 17 17:51:07.068969 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:51:07.070909 disk-uuid[575]: The operation has completed successfully. Mar 17 17:51:07.142398 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:51:07.142540 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:51:07.174184 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:51:07.180125 sh[590]: Success Mar 17 17:51:07.195506 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:51:07.274362 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:51:07.277174 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:51:07.278636 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 17:51:07.299480 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13 Mar 17 17:51:07.299551 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:51:07.299569 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:51:07.299583 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:51:07.299955 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:51:07.309009 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 17:51:07.311811 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:51:07.314601 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:51:07.321106 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:51:07.325260 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:51:07.340089 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:51:07.340170 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:51:07.340202 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:51:07.359074 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:51:07.359166 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:51:07.373476 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:51:07.374836 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:51:07.383358 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:51:07.392763 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 17 17:51:07.470607 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:51:07.482300 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:51:07.512812 ignition[689]: Ignition 2.20.0 Mar 17 17:51:07.512824 ignition[689]: Stage: fetch-offline Mar 17 17:51:07.512882 ignition[689]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:51:07.512893 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:51:07.513209 ignition[689]: parsed url from cmdline: "" Mar 17 17:51:07.515268 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:51:07.513213 ignition[689]: no config URL provided Mar 17 17:51:07.516559 systemd-networkd[778]: lo: Link UP Mar 17 17:51:07.513219 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:51:07.516563 systemd-networkd[778]: lo: Gained carrier Mar 17 17:51:07.513226 ignition[689]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:51:07.519153 systemd-networkd[778]: Enumeration completed Mar 17 17:51:07.513234 ignition[689]: failed to fetch config: resource requires networking Mar 17 17:51:07.519821 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:51:07.513598 ignition[689]: Ignition finished successfully Mar 17 17:51:07.520038 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:51:07.520042 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:51:07.521382 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:51:07.521389 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 17:51:07.521644 systemd[1]: Reached target network.target - Network.
Mar 17 17:51:07.522101 systemd-networkd[778]: eth0: Link UP
Mar 17 17:51:07.522105 systemd-networkd[778]: eth0: Gained carrier
Mar 17 17:51:07.522115 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:07.526236 systemd-networkd[778]: eth1: Link UP
Mar 17 17:51:07.526239 systemd-networkd[778]: eth1: Gained carrier
Mar 17 17:51:07.526251 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:07.528127 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:51:07.542013 ignition[782]: Ignition 2.20.0
Mar 17 17:51:07.542023 ignition[782]: Stage: fetch
Mar 17 17:51:07.542228 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:07.542237 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:51:07.542324 ignition[782]: parsed url from cmdline: ""
Mar 17 17:51:07.542328 ignition[782]: no config URL provided
Mar 17 17:51:07.542333 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:51:07.542340 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:51:07.542497 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 17 17:51:07.543816 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 17 17:51:07.563020 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:51:07.582024 systemd-networkd[778]: eth0: DHCPv4 address 49.12.184.245/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:51:07.744923 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 17 17:51:07.752570 ignition[782]: GET result: OK
Mar 17 17:51:07.752655 ignition[782]: parsing config with SHA512: 94462ec39ead32590676cbee8d480cf502f3887dbe5174df226e8247e097c3f9ec2d8e82af1bb4122302aadf3c716f5943dafa38792aa75e1d1aa30c59ee9ceb
Mar 17 17:51:07.757239 unknown[782]: fetched base config from "system"
Mar 17 17:51:07.757254 unknown[782]: fetched base config from "system"
Mar 17 17:51:07.757261 unknown[782]: fetched user config from "hetzner"
Mar 17 17:51:07.758939 ignition[782]: fetch: fetch complete
Mar 17 17:51:07.758971 ignition[782]: fetch: fetch passed
Mar 17 17:51:07.759065 ignition[782]: Ignition finished successfully
Mar 17 17:51:07.762871 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:51:07.777832 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:51:07.798885 ignition[789]: Ignition 2.20.0
Mar 17 17:51:07.798923 ignition[789]: Stage: kargs
Mar 17 17:51:07.799121 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:07.799132 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:51:07.799852 ignition[789]: kargs: kargs passed
Mar 17 17:51:07.802810 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:51:07.799993 ignition[789]: Ignition finished successfully
Mar 17 17:51:07.809112 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:51:07.835446 ignition[795]: Ignition 2.20.0
Mar 17 17:51:07.835464 ignition[795]: Stage: disks
Mar 17 17:51:07.835658 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:07.835669 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:51:07.836437 ignition[795]: disks: disks passed
Mar 17 17:51:07.836496 ignition[795]: Ignition finished successfully
Mar 17 17:51:07.840146 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:51:07.842658 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:51:07.843471 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:51:07.844728 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:51:07.846098 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:51:07.847104 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:51:07.855229 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:51:07.877405 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:51:07.882592 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:51:08.306104 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:51:08.366921 kernel: EXT4-fs (sda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none.
Mar 17 17:51:08.368597 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:51:08.370854 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:51:08.382106 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:51:08.388137 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:51:08.390355 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:51:08.392082 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:51:08.392131 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:51:08.400164 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:51:08.402305 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (811)
Mar 17 17:51:08.404954 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:08.405013 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:08.405973 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:08.408176 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:51:08.415055 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:51:08.415120 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:08.418788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:51:08.481747 coreos-metadata[813]: Mar 17 17:51:08.480 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 17 17:51:08.483446 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:51:08.485673 coreos-metadata[813]: Mar 17 17:51:08.482 INFO Fetch successful
Mar 17 17:51:08.485673 coreos-metadata[813]: Mar 17 17:51:08.482 INFO wrote hostname ci-4230-1-0-9-b1fb8ed835 to /sysroot/etc/hostname
Mar 17 17:51:08.486619 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:51:08.492921 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:51:08.498172 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:51:08.504691 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:51:08.620010 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:51:08.627071 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:51:08.635226 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:51:08.647588 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:08.683536 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:51:08.701484 ignition[930]: INFO : Ignition 2.20.0
Mar 17 17:51:08.701484 ignition[930]: INFO : Stage: mount
Mar 17 17:51:08.702988 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:08.702988 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:51:08.702988 ignition[930]: INFO : mount: mount passed
Mar 17 17:51:08.702988 ignition[930]: INFO : Ignition finished successfully
Mar 17 17:51:08.704067 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:51:08.712056 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:51:08.761460 systemd-networkd[778]: eth1: Gained IPv6LL
Mar 17 17:51:09.298603 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:51:09.312291 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:51:09.324062 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (941)
Mar 17 17:51:09.326098 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:09.326150 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:09.326163 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:09.329989 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:51:09.330054 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:09.333458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:51:09.337055 systemd-networkd[778]: eth0: Gained IPv6LL
Mar 17 17:51:09.353028 ignition[958]: INFO : Ignition 2.20.0
Mar 17 17:51:09.353028 ignition[958]: INFO : Stage: files
Mar 17 17:51:09.354263 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:09.354263 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:51:09.354263 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:51:09.358292 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:51:09.358292 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:51:09.360508 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:51:09.360508 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:51:09.360508 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:51:09.360094 unknown[958]: wrote ssh authorized keys file for user: core
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:51:09.365156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 17 17:51:09.951765 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Mar 17 17:51:10.248843 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:51:10.248843 ignition[958]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Mar 17 17:51:10.252718 ignition[958]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:51:10.252718 ignition[958]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:51:10.252718 ignition[958]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Mar 17 17:51:10.252718 ignition[958]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:51:10.252718 ignition[958]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:51:10.252718 ignition[958]: INFO : files: files passed
Mar 17 17:51:10.252718 ignition[958]: INFO : Ignition finished successfully
Mar 17 17:51:10.252816 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:51:10.261183 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:51:10.265029 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:51:10.268400 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:51:10.268824 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:51:10.290153 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:10.290153 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:10.293127 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:10.295077 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:51:10.296562 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:51:10.304098 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:51:10.342111 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:51:10.342277 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:51:10.345514 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:51:10.346278 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:51:10.347806 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:51:10.353143 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:51:10.368882 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:51:10.376098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:51:10.390794 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:51:10.392405 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:51:10.393817 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:51:10.394483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:51:10.394626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:51:10.395958 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:51:10.397214 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:51:10.398335 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:51:10.399482 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:51:10.400414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:51:10.401474 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:51:10.402578 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:51:10.403726 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:51:10.404724 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:51:10.405759 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:51:10.406597 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:51:10.406767 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:51:10.407971 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:51:10.409100 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:51:10.410102 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:51:10.411796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:51:10.413327 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:51:10.413527 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:51:10.415030 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:51:10.415207 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:51:10.416612 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:51:10.416758 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:51:10.417598 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:51:10.417740 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:51:10.427287 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:51:10.431335 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:51:10.432028 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:51:10.432245 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:51:10.435258 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:51:10.435517 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:51:10.445920 ignition[1010]: INFO : Ignition 2.20.0
Mar 17 17:51:10.445920 ignition[1010]: INFO : Stage: umount
Mar 17 17:51:10.445920 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:10.445920 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:51:10.450397 ignition[1010]: INFO : umount: umount passed
Mar 17 17:51:10.450397 ignition[1010]: INFO : Ignition finished successfully
Mar 17 17:51:10.448107 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:51:10.448215 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:51:10.454813 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:51:10.455017 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:51:10.456628 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:51:10.456731 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:51:10.458218 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:51:10.458277 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:51:10.459283 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:51:10.459337 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:51:10.460774 systemd[1]: Stopped target network.target - Network.
Mar 17 17:51:10.461577 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:51:10.461650 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:51:10.462682 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:51:10.465548 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:51:10.469973 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:51:10.471971 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:51:10.473447 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:51:10.476143 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:51:10.476205 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:51:10.477534 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:51:10.477592 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:51:10.479044 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:51:10.479105 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:51:10.481342 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:51:10.481424 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:51:10.482658 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:51:10.483751 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:51:10.486255 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:51:10.496372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:51:10.496556 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:51:10.502378 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:51:10.502703 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:51:10.502827 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:51:10.505733 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:51:10.506078 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:51:10.506188 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:51:10.508954 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:51:10.509029 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:51:10.510879 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:51:10.510961 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:51:10.516094 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:51:10.516622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:51:10.516695 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:51:10.517806 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:51:10.517853 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:51:10.521406 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:51:10.521466 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:51:10.522590 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:51:10.522641 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:51:10.525611 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:51:10.528247 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:51:10.528315 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:51:10.538988 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:51:10.539135 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:51:10.549626 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:51:10.551523 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:51:10.553416 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:51:10.553468 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:51:10.555200 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:51:10.555318 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:51:10.556512 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:51:10.556569 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:51:10.560073 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:51:10.560143 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:51:10.562398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:51:10.562459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:51:10.569202 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:51:10.569881 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:51:10.569968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:51:10.573474 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:51:10.573530 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:51:10.576693 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:51:10.576755 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:51:10.578761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:51:10.578821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:51:10.580701 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:51:10.580766 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:51:10.583533 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:51:10.583639 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:51:10.586780 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:51:10.592232 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:51:10.603458 systemd[1]: Switching root.
Mar 17 17:51:10.635774 systemd-journald[236]: Journal stopped
Mar 17 17:51:11.604550 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:51:11.604631 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:51:11.604645 kernel: SELinux: policy capability open_perms=1
Mar 17 17:51:11.604655 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:51:11.604665 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:51:11.604683 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:51:11.604693 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:51:11.604703 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:51:11.604712 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:51:11.604721 kernel: audit: type=1403 audit(1742233870.751:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:51:11.604732 systemd[1]: Successfully loaded SELinux policy in 39.122ms.
Mar 17 17:51:11.604758 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.504ms.
Mar 17 17:51:11.604769 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:51:11.604782 systemd[1]: Detected virtualization kvm.
Mar 17 17:51:11.604793 systemd[1]: Detected architecture arm64.
Mar 17 17:51:11.604803 systemd[1]: Detected first boot.
Mar 17 17:51:11.604814 systemd[1]: Hostname set to .
Mar 17 17:51:11.604825 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:51:11.604835 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:51:11.604845 zram_generator::config[1054]: No configuration found.
Mar 17 17:51:11.604861 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:51:11.604874 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:51:11.604885 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:51:11.604907 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:51:11.604924 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:51:11.604936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:51:11.604951 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:51:11.604961 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:51:11.604971 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:51:11.604986 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:51:11.604999 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:51:11.605011 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:51:11.605021 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:51:11.605032 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:51:11.605044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:51:11.605055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:51:11.605066 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:51:11.605077 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:51:11.605090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:51:11.605100 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 17 17:51:11.605111 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:51:11.605122 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:51:11.605134 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:51:11.605144 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:51:11.605157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:51:11.605169 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:51:11.605179 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:51:11.605191 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:51:11.605201 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:51:11.605212 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:51:11.605224 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:51:11.605234 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:51:11.605245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:51:11.605258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:51:11.605271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:51:11.605282 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:51:11.605292 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:51:11.605303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:51:11.605314 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:51:11.605324 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:51:11.605335 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:51:11.605378 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:51:11.605394 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:51:11.605407 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:51:11.605418 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:51:11.605433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:51:11.605444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:51:11.605455 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:51:11.605465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:51:11.605476 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:51:11.605492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:51:11.605505 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:51:11.605516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:51:11.605527 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:51:11.605539 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:51:11.605549 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:51:11.605562 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:51:11.605573 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:51:11.605583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:51:11.605594 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:51:11.605605 kernel: fuse: init (API version 7.39)
Mar 17 17:51:11.605615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:51:11.605626 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:51:11.605637 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:51:11.605647 kernel: loop: module loaded
Mar 17 17:51:11.605659 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:51:11.605669 kernel: ACPI: bus type drm_connector registered
Mar 17 17:51:11.605680 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:51:11.605690 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:51:11.605701 systemd[1]: Stopped verity-setup.service.
Mar 17 17:51:11.605713 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:51:11.605723 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:51:11.605735 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:51:11.605746 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:51:11.605757 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:51:11.605808 systemd-journald[1129]: Collecting audit messages is disabled.
Mar 17 17:51:11.605831 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:51:11.605843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:51:11.605855 systemd-journald[1129]: Journal started
Mar 17 17:51:11.605878 systemd-journald[1129]: Runtime Journal (/run/log/journal/3f9f275d8cb94c84823b4c08c7d1fc66) is 8M, max 76.6M, 68.6M free.
Mar 17 17:51:11.330007 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:51:11.343465 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:51:11.344447 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:51:11.610175 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:51:11.611017 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:51:11.611239 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:51:11.612434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:51:11.614948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:51:11.616471 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:51:11.617974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:51:11.619026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:51:11.619196 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:51:11.621309 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:51:11.621537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:51:11.622655 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:51:11.623267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:51:11.626288 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:51:11.627726 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:51:11.629459 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:51:11.635894 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:51:11.646586 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:51:11.650707 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:51:11.658013 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:51:11.663883 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:51:11.664670 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:51:11.664773 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:51:11.666813 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:51:11.675249 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:51:11.680427 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:51:11.682752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:51:11.686143 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:51:11.692253 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:51:11.693682 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:51:11.696386 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:51:11.697108 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:51:11.700837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:51:11.706229 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:51:11.713136 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:51:11.717727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:51:11.718788 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:51:11.720781 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:51:11.730867 systemd-journald[1129]: Time spent on flushing to /var/log/journal/3f9f275d8cb94c84823b4c08c7d1fc66 is 43.232ms for 1125 entries.
Mar 17 17:51:11.730867 systemd-journald[1129]: System Journal (/var/log/journal/3f9f275d8cb94c84823b4c08c7d1fc66) is 8M, max 584.8M, 576.8M free.
Mar 17 17:51:11.790186 systemd-journald[1129]: Received client request to flush runtime journal.
Mar 17 17:51:11.790273 kernel: loop0: detected capacity change from 0 to 201592
Mar 17 17:51:11.732124 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:51:11.751216 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:51:11.755338 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:51:11.757582 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:51:11.778187 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:51:11.791988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:51:11.797957 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:51:11.809100 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:51:11.818992 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Mar 17 17:51:11.819710 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Mar 17 17:51:11.825007 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:51:11.828328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:51:11.837954 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:51:11.839506 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:51:11.863974 kernel: loop1: detected capacity change from 0 to 8
Mar 17 17:51:11.883458 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:51:11.884976 kernel: loop2: detected capacity change from 0 to 123192
Mar 17 17:51:11.897136 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:51:11.918664 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Mar 17 17:51:11.918684 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Mar 17 17:51:11.929530 kernel: loop3: detected capacity change from 0 to 113512
Mar 17 17:51:11.928117 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:51:11.965286 kernel: loop4: detected capacity change from 0 to 201592
Mar 17 17:51:12.001878 kernel: loop5: detected capacity change from 0 to 8
Mar 17 17:51:12.007944 kernel: loop6: detected capacity change from 0 to 123192
Mar 17 17:51:12.025001 kernel: loop7: detected capacity change from 0 to 113512
Mar 17 17:51:12.047120 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 17 17:51:12.049156 (sd-merge)[1203]: Merged extensions into '/usr'.
Mar 17 17:51:12.058055 systemd[1]: Reload requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:51:12.058077 systemd[1]: Reloading...
Mar 17 17:51:12.235275 zram_generator::config[1235]: No configuration found.
Mar 17 17:51:12.249803 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:51:12.361989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:51:12.424622 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:51:12.425162 systemd[1]: Reloading finished in 366 ms.
Mar 17 17:51:12.445653 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:51:12.446829 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:51:12.462972 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:51:12.467499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:51:12.495040 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:51:12.495057 systemd[1]: Reloading...
Mar 17 17:51:12.496218 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:51:12.496547 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:51:12.497481 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:51:12.497693 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Mar 17 17:51:12.497752 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Mar 17 17:51:12.502701 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:51:12.502713 systemd-tmpfiles[1270]: Skipping /boot
Mar 17 17:51:12.514538 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:51:12.514697 systemd-tmpfiles[1270]: Skipping /boot
Mar 17 17:51:12.594942 zram_generator::config[1296]: No configuration found.
Mar 17 17:51:12.692917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:51:12.755841 systemd[1]: Reloading finished in 260 ms.
Mar 17 17:51:12.770884 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:51:12.784359 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:51:12.796391 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:51:12.800326 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:51:12.809179 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:51:12.813598 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:51:12.819267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:51:12.823712 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:51:12.829558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:51:12.834800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:51:12.839437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:51:12.845637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:51:12.847103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:51:12.847241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:51:12.850470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:51:12.850644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:51:12.850728 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:51:12.854577 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:51:12.861985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:51:12.874231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:51:12.874976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:51:12.875117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:51:12.882380 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:51:12.886858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:51:12.887281 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:51:12.902410 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:51:12.904525 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:51:12.906618 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:51:12.908702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:51:12.909410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:51:12.918803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:51:12.920537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:51:12.922830 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:51:12.923530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:51:12.925246 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Mar 17 17:51:12.928645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:51:12.930161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:51:12.936174 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:51:12.967224 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:51:12.969001 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:51:12.979171 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:51:12.991188 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:51:12.992307 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:51:13.005058 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:51:13.008782 augenrules[1389]: No rules
Mar 17 17:51:13.009261 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:51:13.009566 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:51:13.098251 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 17 17:51:13.247867 systemd-networkd[1378]: lo: Link UP
Mar 17 17:51:13.248586 systemd-networkd[1378]: lo: Gained carrier
Mar 17 17:51:13.252120 systemd-networkd[1378]: Enumeration completed
Mar 17 17:51:13.252469 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:51:13.256026 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:13.256036 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:51:13.257138 systemd-networkd[1378]: eth0: Link UP
Mar 17 17:51:13.257534 systemd-resolved[1342]: Positive Trust Anchors:
Mar 17 17:51:13.257557 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:51:13.257588 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:51:13.258025 systemd-networkd[1378]: eth0: Gained carrier
Mar 17 17:51:13.258057 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:13.268274 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:51:13.271467 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:13.272016 systemd-networkd[1378]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:51:13.272163 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:51:13.273600 systemd-resolved[1342]: Using system hostname 'ci-4230-1-0-9-b1fb8ed835'.
Mar 17 17:51:13.274893 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:51:13.275009 systemd-networkd[1378]: eth1: Link UP
Mar 17 17:51:13.275014 systemd-networkd[1378]: eth1: Gained carrier
Mar 17 17:51:13.275038 systemd-networkd[1378]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:13.276078 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:51:13.281087 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:51:13.281866 systemd[1]: Reached target network.target - Network.
Mar 17 17:51:13.282407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:51:13.302738 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:51:13.303053 systemd-networkd[1378]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:51:13.304638 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Mar 17 17:51:13.307947 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:51:13.319001 systemd-networkd[1378]: eth0: DHCPv4 address 49.12.184.245/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:51:13.320806 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Mar 17 17:51:13.378945 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1391)
Mar 17 17:51:13.386521 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 17 17:51:13.386664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:51:13.394373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:51:13.398154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:51:13.403560 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 17 17:51:13.403894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:51:13.405048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:51:13.405101 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:51:13.405129 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:51:13.405555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:51:13.406154 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:51:13.411932 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:51:13.412048 kernel: [drm] features: -context_init
Mar 17 17:51:13.414946 kernel: [drm] number of scanouts: 1
Mar 17 17:51:13.415051 kernel: [drm] number of cap sets: 0
Mar 17 17:51:13.423723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:51:13.428768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:51:13.434475 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:51:13.434729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:51:13.436282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:51:13.436352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:51:13.471928 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 17 17:51:13.482141 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:51:13.488955 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:51:13.492314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:51:13.508050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:51:13.514972 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:51:13.518809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:51:13.520937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:51:13.524132 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:51:13.529292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:51:13.530500 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:51:13.596154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:51:13.631673 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:51:13.639378 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:51:13.653697 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:51:13.683538 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:51:13.685030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:51:13.685813 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:51:13.686725 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:51:13.687647 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:51:13.688855 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:51:13.690536 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:51:13.691293 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:51:13.692151 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:51:13.692197 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:51:13.692953 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:51:13.695803 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:51:13.698832 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:51:13.704525 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:51:13.705518 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:51:13.706285 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:51:13.715207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:51:13.717392 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:51:13.729254 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:51:13.733052 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:51:13.734678 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:51:13.735120 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:51:13.736666 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:51:13.737303 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:51:13.737375 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:51:13.744740 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:51:13.747070 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:51:13.749114 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:51:13.757078 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:51:13.769249 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:51:13.770399 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:51:13.771882 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:51:13.774550 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 17 17:51:13.778160 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:51:13.783130 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:51:13.788923 jq[1468]: false
Mar 17 17:51:13.788330 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:51:13.790019 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:51:13.790609 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:51:13.793154 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:51:13.795690 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:51:13.797818 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:51:13.802313 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:51:13.802606 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:51:13.846535 extend-filesystems[1469]: Found loop4
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found loop5
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found loop6
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found loop7
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda1
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda2
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda3
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found usr
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda4
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda6
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda7
Mar 17 17:51:13.853645 extend-filesystems[1469]: Found sda9
Mar 17 17:51:13.853645 extend-filesystems[1469]: Checking size of /dev/sda9
Mar 17 17:51:13.892077 jq[1478]: true
Mar 17 17:51:13.861466 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:51:13.892290 coreos-metadata[1466]: Mar 17 17:51:13.860 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 17 17:51:13.892290 coreos-metadata[1466]: Mar 17 17:51:13.865 INFO Fetch successful
Mar 17 17:51:13.892290 coreos-metadata[1466]: Mar 17 17:51:13.865 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 17 17:51:13.892290 coreos-metadata[1466]: Mar 17 17:51:13.868 INFO Fetch successful
Mar 17 17:51:13.861221 dbus-daemon[1467]: [system] SELinux support is enabled
Mar 17 17:51:13.868676 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:51:13.869578 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:51:13.875667 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:51:13.875699 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:51:13.881137 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:51:13.881164 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:51:13.898026 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:51:13.902107 jq[1492]: true
Mar 17 17:51:13.915459 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:51:13.916988 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:51:13.925118 extend-filesystems[1469]: Resized partition /dev/sda9
Mar 17 17:51:13.943745 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:51:13.955926 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 17 17:51:13.984001 update_engine[1477]: I20250317 17:51:13.983210 1477 main.cc:92] Flatcar Update Engine starting
Mar 17 17:51:13.998264 update_engine[1477]: I20250317 17:51:13.998184 1477 update_check_scheduler.cc:74] Next update check in 11m13s
Mar 17 17:51:13.999629 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:51:14.012393 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:51:14.015789 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:51:14.020257 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:51:14.046075 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:51:14.046986 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:51:14.062271 systemd[1]: Starting sshkeys.service...
Mar 17 17:51:14.099093 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1379)
Mar 17 17:51:14.105218 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 17 17:51:14.103459 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:51:14.110604 systemd-logind[1476]: New seat seat0.
Mar 17 17:51:14.116401 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:51:14.126240 extend-filesystems[1514]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 17:51:14.126240 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 17 17:51:14.126240 extend-filesystems[1514]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 17 17:51:14.134852 extend-filesystems[1469]: Resized filesystem in /dev/sda9
Mar 17 17:51:14.134852 extend-filesystems[1469]: Found sr0
Mar 17 17:51:14.127448 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:51:14.128440 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:51:14.139187 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:51:14.139205 systemd-logind[1476]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Mar 17 17:51:14.143311 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:51:14.249927 coreos-metadata[1539]: Mar 17 17:51:14.248 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 17 17:51:14.251453 coreos-metadata[1539]: Mar 17 17:51:14.251 INFO Fetch successful
Mar 17 17:51:14.268208 unknown[1539]: wrote ssh authorized keys file for user: core
Mar 17 17:51:14.310131 update-ssh-keys[1548]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:51:14.311543 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:51:14.320941 systemd[1]: Finished sshkeys.service.
Mar 17 17:51:14.330044 systemd-networkd[1378]: eth1: Gained IPv6LL
Mar 17 17:51:14.330643 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
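Aside (not part of the log): the resize2fs and EXT4-fs entries above report the root filesystem growing from 1617920 to 9393147 blocks of 4 KiB each. A minimal sketch, with a hypothetical helper name, converting those block counts into human-readable sizes:

```python
# Hypothetical helper, not from the log: convert ext4 block counts
# ("(4k) blocks" per the resize2fs message above) into GiB.
BLOCK_SIZE = 4096  # bytes per block, as reported by resize2fs

def blocks_to_gib(blocks: int) -> float:
    """Return the size in GiB of `blocks` 4 KiB filesystem blocks."""
    return blocks * BLOCK_SIZE / 2**30

old_blocks, new_blocks = 1_617_920, 9_393_147  # values from the log entries
print(f"before resize: {blocks_to_gib(old_blocks):.2f} GiB")
print(f"after resize:  {blocks_to_gib(new_blocks):.2f} GiB")
```

This puts the online resize at roughly 6.2 GiB grown to about 35.8 GiB, consistent with a cloud image expanded to fill its provisioned disk on first boot.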
Mar 17 17:51:14.336792 containerd[1493]: time="2025-03-17T17:51:14.336684480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:51:14.339455 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:51:14.345840 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:51:14.352164 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:51:14.356171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:51:14.364254 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:51:14.397090 containerd[1493]: time="2025-03-17T17:51:14.397029480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:51:14.398975 containerd[1493]: time="2025-03-17T17:51:14.398924400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:51:14.399728 containerd[1493]: time="2025-03-17T17:51:14.399706520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:51:14.399854 containerd[1493]: time="2025-03-17T17:51:14.399837440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:51:14.400213 containerd[1493]: time="2025-03-17T17:51:14.400184960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:51:14.400408 containerd[1493]: time="2025-03-17T17:51:14.400384160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:51:14.401322 containerd[1493]: time="2025-03-17T17:51:14.401284920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:51:14.401957 containerd[1493]: time="2025-03-17T17:51:14.401934800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:51:14.403211 containerd[1493]: time="2025-03-17T17:51:14.402380920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:51:14.403211 containerd[1493]: time="2025-03-17T17:51:14.402432240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:51:14.403211 containerd[1493]: time="2025-03-17T17:51:14.402451400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:51:14.403211 containerd[1493]: time="2025-03-17T17:51:14.402462080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:51:14.403211 containerd[1493]: time="2025-03-17T17:51:14.402563360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:51:14.403211 containerd[1493]: time="2025-03-17T17:51:14.402785280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:51:14.405452 containerd[1493]: time="2025-03-17T17:51:14.405408520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:51:14.405857 containerd[1493]: time="2025-03-17T17:51:14.405540040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:51:14.405857 containerd[1493]: time="2025-03-17T17:51:14.405680880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:51:14.405857 containerd[1493]: time="2025-03-17T17:51:14.405788200Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414037280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414118040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414149360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414191160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414212320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414479160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414781240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.414980200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.415003080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.415027040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.415045600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.415061880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.415080440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416262 containerd[1493]: time="2025-03-17T17:51:14.415099640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415120520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415139240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415158200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415173920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415201320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415218680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415241680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415261440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415277520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415292920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415312960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415345320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415365680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.416826 containerd[1493]: time="2025-03-17T17:51:14.415386000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415404720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415419920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415435880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415458200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415488160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415505680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415523600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415842440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415868440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:51:14.417100 containerd[1493]: time="2025-03-17T17:51:14.415882480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:51:14.419113 containerd[1493]: time="2025-03-17T17:51:14.419078040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:51:14.419228 containerd[1493]: time="2025-03-17T17:51:14.419214120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.422850 containerd[1493]: time="2025-03-17T17:51:14.421012000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:51:14.422850 containerd[1493]: time="2025-03-17T17:51:14.421047800Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:51:14.422850 containerd[1493]: time="2025-03-17T17:51:14.421064320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:51:14.422978 containerd[1493]: time="2025-03-17T17:51:14.422680640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:51:14.422978 containerd[1493]: time="2025-03-17T17:51:14.422742840Z" level=info msg="Connect containerd service"
Mar 17 17:51:14.422978 containerd[1493]: time="2025-03-17T17:51:14.422804240Z" level=info msg="using legacy CRI server"
Mar 17 17:51:14.422978 containerd[1493]: time="2025-03-17T17:51:14.422812360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:51:14.423863 containerd[1493]: time="2025-03-17T17:51:14.423183040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:51:14.424911 containerd[1493]: time="2025-03-17T17:51:14.424874480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:51:14.427089 containerd[1493]: time="2025-03-17T17:51:14.427047680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:51:14.427165 containerd[1493]: time="2025-03-17T17:51:14.427131080Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:51:14.427221 containerd[1493]: time="2025-03-17T17:51:14.427188320Z" level=info msg="Start subscribing containerd event"
Mar 17 17:51:14.427246 containerd[1493]: time="2025-03-17T17:51:14.427233880Z" level=info msg="Start recovering state"
Mar 17 17:51:14.427321 containerd[1493]: time="2025-03-17T17:51:14.427305400Z" level=info msg="Start event monitor"
Mar 17 17:51:14.427392 containerd[1493]: time="2025-03-17T17:51:14.427322360Z" level=info msg="Start snapshots syncer"
Mar 17 17:51:14.427439 containerd[1493]: time="2025-03-17T17:51:14.427393720Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:51:14.427439 containerd[1493]: time="2025-03-17T17:51:14.427406560Z" level=info msg="Start streaming server"
Mar 17 17:51:14.428424 containerd[1493]: time="2025-03-17T17:51:14.427580320Z" level=info msg="containerd successfully booted in 0.091923s"
Mar 17 17:51:14.427702 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:51:14.435439 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
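Aside (not part of the log): the containerd entries above are logfmt-style records with `time=`, `level=`, `msg=`, and optional `error=`/`type=` fields, where values containing spaces are double-quoted. A minimal sketch, with a hypothetical function name, of pulling those fields out of one record:

```python
import re

# Hypothetical parser, not from the log: extract key=value fields from a
# containerd logfmt-style record. Values are either a double-quoted string
# (possibly containing escaped quotes) or a bare run of non-space characters.
FIELD_RE = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_containerd_record(record: str) -> dict:
    """Return a dict mapping field names to their (unwrapped) values."""
    fields = {}
    for m in FIELD_RE.finditer(record):
        key, quoted, bare = m.group(1), m.group(2), m.group(3)
        fields[key] = quoted if quoted is not None else bare
    return fields

rec = parse_containerd_record(
    'time="2025-03-17T17:51:14.427580320Z" level=info '
    'msg="containerd successfully booted in 0.091923s"'
)
```

Note this sketch does not unescape `\"` sequences inside quoted values; a fuller parser would post-process those.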
Mar 17 17:51:14.841074 systemd-networkd[1378]: eth0: Gained IPv6LL
Mar 17 17:51:14.841681 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection.
Mar 17 17:51:15.055381 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:51:15.077557 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:51:15.097321 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:51:15.109121 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:51:15.109730 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:51:15.119009 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:51:15.138002 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:51:15.147424 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:51:15.153371 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 17 17:51:15.155430 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:51:15.158726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:51:15.161641 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:51:15.163543 systemd[1]: Startup finished in 853ms (kernel) + 6.057s (initrd) + 4.451s (userspace) = 11.361s.
Mar 17 17:51:15.182543 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:51:15.714193 kubelet[1590]: E0317 17:51:15.713949 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:51:15.717318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:51:15.717535 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:51:15.718108 systemd[1]: kubelet.service: Consumed 876ms CPU time, 246.3M memory peak.
Mar 17 17:51:25.968372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:51:25.975170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:51:26.106238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:51:26.108079 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:51:26.156854 kubelet[1610]: E0317 17:51:26.156769 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:51:26.160604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:51:26.160742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:51:26.161664 systemd[1]: kubelet.service: Consumed 159ms CPU time, 101.2M memory peak.
Mar 17 17:51:36.412772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:51:36.427708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:51:36.562466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:51:36.567810 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:51:36.615068 kubelet[1624]: E0317 17:51:36.614956 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:51:36.618373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:51:36.618540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:51:36.620031 systemd[1]: kubelet.service: Consumed 171ms CPU time, 102.2M memory peak.
Mar 17 17:51:45.341718 systemd-timesyncd[1358]: Contacted time server 213.239.234.28:123 (2.flatcar.pool.ntp.org).
Mar 17 17:51:45.341995 systemd-timesyncd[1358]: Initial clock synchronization to Mon 2025-03-17 17:51:45.116985 UTC.
Mar 17 17:51:46.869063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 17:51:46.878172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:51:46.992470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:51:47.004970 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:51:47.053202 kubelet[1639]: E0317 17:51:47.053127 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:51:47.055674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:51:47.056104 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:51:47.056772 systemd[1]: kubelet.service: Consumed 164ms CPU time, 100.2M memory peak.
Mar 17 17:51:57.117381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 17:51:57.133224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:51:57.269330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:51:57.283006 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:51:57.336548 kubelet[1655]: E0317 17:51:57.336488 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:51:57.340306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:51:57.341772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:51:57.342929 systemd[1]: kubelet.service: Consumed 170ms CPU time, 102.3M memory peak.
Mar 17 17:51:58.825771 update_engine[1477]: I20250317 17:51:58.825088 1477 update_attempter.cc:509] Updating boot flags...
Mar 17 17:51:58.879922 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1671)
Mar 17 17:51:58.947230 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1670)
Mar 17 17:51:59.008937 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1670)
Mar 17 17:52:07.366662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 17:52:07.374274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:07.498004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:07.509747 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:52:07.558385 kubelet[1691]: E0317 17:52:07.558261 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:52:07.560870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:52:07.561136 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:52:07.561560 systemd[1]: kubelet.service: Consumed 160ms CPU time, 101.6M memory peak.
Mar 17 17:52:17.618009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 17:52:17.624577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:17.749261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:17.754812 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:52:17.799837 kubelet[1706]: E0317 17:52:17.799717 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:52:17.802833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:52:17.803033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:52:17.803713 systemd[1]: kubelet.service: Consumed 161ms CPU time, 100M memory peak.
Mar 17 17:52:27.867694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 17 17:52:27.879232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:28.020192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:28.023282 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:52:28.072085 kubelet[1721]: E0317 17:52:28.071968 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:52:28.075406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:52:28.075626 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:52:28.076390 systemd[1]: kubelet.service: Consumed 167ms CPU time, 100.2M memory peak.
Mar 17 17:52:38.117089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 17 17:52:38.130394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:38.278148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:38.288526 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:52:38.347000 kubelet[1736]: E0317 17:52:38.346839 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:52:38.350128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:52:38.350284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:52:38.351422 systemd[1]: kubelet.service: Consumed 177ms CPU time, 102.1M memory peak.
Mar 17 17:52:48.367202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 17 17:52:48.376267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:48.503351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:48.517858 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:48.568738 kubelet[1752]: E0317 17:52:48.568642 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:48.571892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:48.572103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:52:48.572540 systemd[1]: kubelet.service: Consumed 157ms CPU time, 100.4M memory peak. Mar 17 17:52:58.618217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Mar 17 17:52:58.626335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:58.770828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:58.786952 (kubelet)[1767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:58.849953 kubelet[1767]: E0317 17:52:58.849792 1767 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:58.853436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:58.853892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:52:58.854799 systemd[1]: kubelet.service: Consumed 187ms CPU time, 102.1M memory peak. 
Mar 17 17:53:08.735378 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:53:08.741334 systemd[1]: Started sshd@0-49.12.184.245:22-139.178.89.65:55604.service - OpenSSH per-connection server daemon (139.178.89.65:55604). Mar 17 17:53:08.867672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 17 17:53:08.886680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:09.034109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:09.044523 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:53:09.102235 kubelet[1785]: E0317 17:53:09.102180 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:53:09.105872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:53:09.106346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:53:09.107206 systemd[1]: kubelet.service: Consumed 163ms CPU time, 102.2M memory peak. Mar 17 17:53:09.749775 sshd[1775]: Accepted publickey for core from 139.178.89.65 port 55604 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:53:09.752217 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:09.767225 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:53:09.775435 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:53:09.780046 systemd-logind[1476]: New session 1 of user core. 
Mar 17 17:53:09.788077 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:53:09.794764 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:53:09.806982 (systemd)[1794]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:53:09.809875 systemd-logind[1476]: New session c1 of user core. Mar 17 17:53:09.954939 systemd[1794]: Queued start job for default target default.target. Mar 17 17:53:09.964638 systemd[1794]: Created slice app.slice - User Application Slice. Mar 17 17:53:09.964694 systemd[1794]: Reached target paths.target - Paths. Mar 17 17:53:09.964761 systemd[1794]: Reached target timers.target - Timers. Mar 17 17:53:09.967371 systemd[1794]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:53:09.983040 systemd[1794]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:53:09.983196 systemd[1794]: Reached target sockets.target - Sockets. Mar 17 17:53:09.983258 systemd[1794]: Reached target basic.target - Basic System. Mar 17 17:53:09.983329 systemd[1794]: Reached target default.target - Main User Target. Mar 17 17:53:09.983374 systemd[1794]: Startup finished in 165ms. Mar 17 17:53:09.983914 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:53:09.996733 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:53:10.699463 systemd[1]: Started sshd@1-49.12.184.245:22-139.178.89.65:55606.service - OpenSSH per-connection server daemon (139.178.89.65:55606). Mar 17 17:53:11.696791 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 55606 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:53:11.699420 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:11.705597 systemd-logind[1476]: New session 2 of user core. Mar 17 17:53:11.714226 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:53:12.388951 sshd[1807]: Connection closed by 139.178.89.65 port 55606 Mar 17 17:53:12.390174 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:12.396453 systemd[1]: sshd@1-49.12.184.245:22-139.178.89.65:55606.service: Deactivated successfully. Mar 17 17:53:12.399784 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:53:12.401676 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:53:12.403360 systemd-logind[1476]: Removed session 2. Mar 17 17:53:12.568325 systemd[1]: Started sshd@2-49.12.184.245:22-139.178.89.65:52890.service - OpenSSH per-connection server daemon (139.178.89.65:52890). Mar 17 17:53:13.556062 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 52890 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:53:13.558186 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:13.569042 systemd-logind[1476]: New session 3 of user core. Mar 17 17:53:13.571149 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:53:14.239060 sshd[1815]: Connection closed by 139.178.89.65 port 52890 Mar 17 17:53:14.238857 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:14.248235 systemd[1]: sshd@2-49.12.184.245:22-139.178.89.65:52890.service: Deactivated successfully. Mar 17 17:53:14.252894 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:53:14.254580 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:53:14.257304 systemd-logind[1476]: Removed session 3. Mar 17 17:53:14.431019 systemd[1]: Started sshd@3-49.12.184.245:22-139.178.89.65:52896.service - OpenSSH per-connection server daemon (139.178.89.65:52896). 
Mar 17 17:53:15.428151 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 52896 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:53:15.430323 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:15.435570 systemd-logind[1476]: New session 4 of user core. Mar 17 17:53:15.444306 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:53:16.119853 sshd[1823]: Connection closed by 139.178.89.65 port 52896 Mar 17 17:53:16.119030 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:16.127720 systemd[1]: sshd@3-49.12.184.245:22-139.178.89.65:52896.service: Deactivated successfully. Mar 17 17:53:16.132844 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:53:16.136872 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:53:16.139589 systemd-logind[1476]: Removed session 4. Mar 17 17:53:16.292637 systemd[1]: Started sshd@4-49.12.184.245:22-139.178.89.65:52906.service - OpenSSH per-connection server daemon (139.178.89.65:52906). Mar 17 17:53:17.300560 sshd[1829]: Accepted publickey for core from 139.178.89.65 port 52906 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:53:17.304393 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:17.312532 systemd-logind[1476]: New session 5 of user core. Mar 17 17:53:17.329311 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:53:17.841888 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:53:17.842776 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:17.861970 sudo[1832]: pam_unix(sudo:session): session closed for user root Mar 17 17:53:18.022174 sshd[1831]: Connection closed by 139.178.89.65 port 52906 Mar 17 17:53:18.023503 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:18.031048 systemd[1]: sshd@4-49.12.184.245:22-139.178.89.65:52906.service: Deactivated successfully. Mar 17 17:53:18.034866 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:53:18.039568 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:53:18.041704 systemd-logind[1476]: Removed session 5. Mar 17 17:53:18.200222 systemd[1]: Started sshd@5-49.12.184.245:22-139.178.89.65:52912.service - OpenSSH per-connection server daemon (139.178.89.65:52912). Mar 17 17:53:19.116506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Mar 17 17:53:19.124532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:19.189931 sshd[1838]: Accepted publickey for core from 139.178.89.65 port 52912 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:53:19.191137 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:19.201688 systemd-logind[1476]: New session 6 of user core. Mar 17 17:53:19.210561 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:53:19.257869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:53:19.271591 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:53:19.324211 kubelet[1849]: E0317 17:53:19.324142 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:53:19.327037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:53:19.327796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:53:19.328723 systemd[1]: kubelet.service: Consumed 170ms CPU time, 99.9M memory peak. Mar 17 17:53:19.711867 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:53:19.712785 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:19.717608 sudo[1857]: pam_unix(sudo:session): session closed for user root Mar 17 17:53:19.724192 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:53:19.724537 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:19.743368 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:53:19.781711 augenrules[1879]: No rules Mar 17 17:53:19.783404 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:53:19.783634 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Mar 17 17:53:19.788287 sudo[1856]: pam_unix(sudo:session): session closed for user root Mar 17 17:53:19.948991 sshd[1843]: Connection closed by 139.178.89.65 port 52912 Mar 17 17:53:19.949596 sshd-session[1838]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:19.954227 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:53:19.955516 systemd[1]: sshd@5-49.12.184.245:22-139.178.89.65:52912.service: Deactivated successfully. Mar 17 17:53:19.958121 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:53:19.960678 systemd-logind[1476]: Removed session 6. Mar 17 17:53:20.121951 systemd[1]: Started sshd@6-49.12.184.245:22-139.178.89.65:52920.service - OpenSSH per-connection server daemon (139.178.89.65:52920). Mar 17 17:53:21.122037 sshd[1888]: Accepted publickey for core from 139.178.89.65 port 52920 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:53:21.124834 sshd-session[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:21.133429 systemd-logind[1476]: New session 7 of user core. Mar 17 17:53:21.138442 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:53:21.647563 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:53:21.647866 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:22.302325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:22.303329 systemd[1]: kubelet.service: Consumed 170ms CPU time, 99.9M memory peak. Mar 17 17:53:22.311220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:22.351225 systemd[1]: Reload requested from client PID 1924 ('systemctl') (unit session-7.scope)... Mar 17 17:53:22.351410 systemd[1]: Reloading... Mar 17 17:53:22.498087 zram_generator::config[1971]: No configuration found. 
Mar 17 17:53:22.623222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:53:22.720165 systemd[1]: Reloading finished in 368 ms. Mar 17 17:53:22.783726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:22.790585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:22.796791 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:53:22.797138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:22.797212 systemd[1]: kubelet.service: Consumed 113ms CPU time, 90.1M memory peak. Mar 17 17:53:22.816402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:22.968852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:22.975252 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:53:23.025821 kubelet[2018]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:53:23.027950 kubelet[2018]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:53:23.027950 kubelet[2018]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:53:23.027950 kubelet[2018]: I0317 17:53:23.026439 2018 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:53:23.731219 kubelet[2018]: I0317 17:53:23.731128 2018 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:53:23.732184 kubelet[2018]: I0317 17:53:23.731529 2018 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:53:23.732890 kubelet[2018]: I0317 17:53:23.732839 2018 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:53:23.768719 kubelet[2018]: I0317 17:53:23.768092 2018 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:53:23.781862 kubelet[2018]: E0317 17:53:23.781781 2018 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:53:23.781862 kubelet[2018]: I0317 17:53:23.781837 2018 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:53:23.786417 kubelet[2018]: I0317 17:53:23.786315 2018 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:53:23.786694 kubelet[2018]: I0317 17:53:23.786633 2018 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:53:23.786883 kubelet[2018]: I0317 17:53:23.786697 2018 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:53:23.787021 kubelet[2018]: I0317 17:53:23.787011 2018 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 17:53:23.787072 kubelet[2018]: I0317 17:53:23.787023 2018 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:53:23.787285 kubelet[2018]: I0317 17:53:23.787265 2018 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:53:23.791559 kubelet[2018]: I0317 17:53:23.791386 2018 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:53:23.791559 kubelet[2018]: I0317 17:53:23.791420 2018 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:53:23.791559 kubelet[2018]: I0317 17:53:23.791447 2018 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:53:23.791559 kubelet[2018]: I0317 17:53:23.791459 2018 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:53:23.795848 kubelet[2018]: E0317 17:53:23.794997 2018 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:23.795848 kubelet[2018]: E0317 17:53:23.795069 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:23.796038 kubelet[2018]: I0317 17:53:23.796003 2018 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:53:23.796827 kubelet[2018]: I0317 17:53:23.796791 2018 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:53:23.798598 kubelet[2018]: W0317 17:53:23.796990 2018 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 17:53:23.798598 kubelet[2018]: I0317 17:53:23.798303 2018 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:53:23.798598 kubelet[2018]: I0317 17:53:23.798384 2018 server.go:1287] "Started kubelet" Mar 17 17:53:23.799621 kubelet[2018]: I0317 17:53:23.799533 2018 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:53:23.801082 kubelet[2018]: I0317 17:53:23.800972 2018 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:53:23.802284 kubelet[2018]: I0317 17:53:23.802203 2018 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:53:23.803938 kubelet[2018]: I0317 17:53:23.802930 2018 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:53:23.806056 kubelet[2018]: I0317 17:53:23.806015 2018 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:53:23.816797 kubelet[2018]: I0317 17:53:23.816744 2018 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:53:23.818392 kubelet[2018]: E0317 17:53:23.818004 2018 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.182da8988001f5a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-03-17 17:53:23.798357415 +0000 UTC m=+0.818676965,LastTimestamp:2025-03-17 17:53:23.798357415 +0000 UTC m=+0.818676965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Mar 17 17:53:23.818633 kubelet[2018]: I0317 17:53:23.818598 2018 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:53:23.820773 kubelet[2018]: E0317 17:53:23.820739 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:23.821904 kubelet[2018]: I0317 17:53:23.821845 2018 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:53:23.822229 kubelet[2018]: I0317 17:53:23.822201 2018 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:53:23.828952 kubelet[2018]: W0317 17:53:23.827504 2018 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:53:23.828952 kubelet[2018]: E0317 17:53:23.827562 2018 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:53:23.828952 kubelet[2018]: W0317 17:53:23.827655 2018 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:53:23.828952 kubelet[2018]: E0317 17:53:23.827674 2018 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:53:23.832137 kubelet[2018]: E0317 17:53:23.832101 2018 kubelet.go:1561] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:53:23.834238 kubelet[2018]: I0317 17:53:23.834207 2018 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:53:23.835928 kubelet[2018]: I0317 17:53:23.834457 2018 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:53:23.835928 kubelet[2018]: I0317 17:53:23.834593 2018 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:53:23.851174 kubelet[2018]: I0317 17:53:23.851143 2018 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:53:23.851389 kubelet[2018]: I0317 17:53:23.851370 2018 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:53:23.851463 kubelet[2018]: I0317 17:53:23.851455 2018 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:53:23.858570 kubelet[2018]: I0317 17:53:23.858540 2018 policy_none.go:49] "None policy: Start" Mar 17 17:53:23.858744 kubelet[2018]: I0317 17:53:23.858722 2018 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:53:23.858820 kubelet[2018]: I0317 17:53:23.858804 2018 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:53:23.864173 kubelet[2018]: E0317 17:53:23.864133 2018 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.4\" not found" node="10.0.0.4" Mar 17 17:53:23.872411 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:53:23.881641 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:53:23.885191 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:53:23.896627 kubelet[2018]: I0317 17:53:23.896423 2018 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:53:23.898262 kubelet[2018]: I0317 17:53:23.897070 2018 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:53:23.898262 kubelet[2018]: I0317 17:53:23.897091 2018 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:53:23.899644 kubelet[2018]: I0317 17:53:23.899581 2018 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:53:23.902608 kubelet[2018]: E0317 17:53:23.902577 2018 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:53:23.902883 kubelet[2018]: E0317 17:53:23.902755 2018 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Mar 17 17:53:23.904839 kubelet[2018]: I0317 17:53:23.904759 2018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:53:23.906551 kubelet[2018]: I0317 17:53:23.906520 2018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:53:23.906551 kubelet[2018]: I0317 17:53:23.906590 2018 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:53:23.906551 kubelet[2018]: I0317 17:53:23.906615 2018 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 17:53:23.906551 kubelet[2018]: I0317 17:53:23.906621 2018 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:53:23.906551 kubelet[2018]: E0317 17:53:23.906739 2018 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 17:53:23.999303 kubelet[2018]: I0317 17:53:23.999129 2018 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.4" Mar 17 17:53:24.010208 kubelet[2018]: I0317 17:53:24.010121 2018 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.4" Mar 17 17:53:24.010208 kubelet[2018]: E0317 17:53:24.010195 2018 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found" Mar 17 17:53:24.017027 kubelet[2018]: E0317 17:53:24.016976 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.035648 sudo[1891]: pam_unix(sudo:session): session closed for user root Mar 17 17:53:24.117234 kubelet[2018]: E0317 17:53:24.117143 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.195994 sshd[1890]: Connection closed by 139.178.89.65 port 52920 Mar 17 17:53:24.196728 sshd-session[1888]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:24.201783 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:53:24.202984 systemd[1]: sshd@6-49.12.184.245:22-139.178.89.65:52920.service: Deactivated successfully. Mar 17 17:53:24.206386 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:53:24.206732 systemd[1]: session-7.scope: Consumed 531ms CPU time, 69M memory peak. Mar 17 17:53:24.208601 systemd-logind[1476]: Removed session 7. 
Mar 17 17:53:24.217911 kubelet[2018]: E0317 17:53:24.217803 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.318410 kubelet[2018]: E0317 17:53:24.318219 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.419289 kubelet[2018]: E0317 17:53:24.419200 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.520269 kubelet[2018]: E0317 17:53:24.520208 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.621187 kubelet[2018]: E0317 17:53:24.621129 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.722184 kubelet[2018]: E0317 17:53:24.722119 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 17:53:24.736698 kubelet[2018]: I0317 17:53:24.736297 2018 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:53:24.736698 kubelet[2018]: W0317 17:53:24.736605 2018 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:53:24.736698 kubelet[2018]: W0317 17:53:24.736657 2018 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:53:24.795724 kubelet[2018]: E0317 17:53:24.795630 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Mar 17 17:53:24.822808 kubelet[2018]: E0317 17:53:24.822714 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 17 17:53:24.923986 kubelet[2018]: E0317 17:53:24.923821 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 17 17:53:25.024940 kubelet[2018]: E0317 17:53:25.024850 2018 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Mar 17 17:53:25.127764 kubelet[2018]: I0317 17:53:25.127728 2018 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Mar 17 17:53:25.128833 containerd[1493]: time="2025-03-17T17:53:25.128759196Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:53:25.129655 kubelet[2018]: I0317 17:53:25.129535 2018 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Mar 17 17:53:25.799554 kubelet[2018]: E0317 17:53:25.796514 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:25.799554 kubelet[2018]: I0317 17:53:25.796622 2018 apiserver.go:52] "Watching apiserver"
Mar 17 17:53:25.809657 kubelet[2018]: E0317 17:53:25.809125 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:25.819499 systemd[1]: Created slice kubepods-besteffort-poddaf64c0b_3a05_4376_90e5_1ee47c3b7569.slice - libcontainer container kubepods-besteffort-poddaf64c0b_3a05_4376_90e5_1ee47c3b7569.slice.
Mar 17 17:53:25.822443 kubelet[2018]: I0317 17:53:25.822334 2018 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:53:25.835010 systemd[1]: Created slice kubepods-besteffort-podd0fe17bc_6e93_4f28_a3f2_7e1c05b4571d.slice - libcontainer container kubepods-besteffort-podd0fe17bc_6e93_4f28_a3f2_7e1c05b4571d.slice.
Mar 17 17:53:25.836031 kubelet[2018]: I0317 17:53:25.835992 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-tigera-ca-bundle\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836113 kubelet[2018]: I0317 17:53:25.836043 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-cni-net-dir\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836113 kubelet[2018]: I0317 17:53:25.836069 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0027786-a11b-45ae-ada7-03122d7bb3ca-varrun\") pod \"csi-node-driver-zszl8\" (UID: \"c0027786-a11b-45ae-ada7-03122d7bb3ca\") " pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:25.836113 kubelet[2018]: I0317 17:53:25.836093 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0027786-a11b-45ae-ada7-03122d7bb3ca-kubelet-dir\") pod \"csi-node-driver-zszl8\" (UID: \"c0027786-a11b-45ae-ada7-03122d7bb3ca\") " pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:25.836205 kubelet[2018]: I0317 17:53:25.836115 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0027786-a11b-45ae-ada7-03122d7bb3ca-registration-dir\") pod \"csi-node-driver-zszl8\" (UID: \"c0027786-a11b-45ae-ada7-03122d7bb3ca\") " pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:25.836205 kubelet[2018]: I0317 17:53:25.836136 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/daf64c0b-3a05-4376-90e5-1ee47c3b7569-xtables-lock\") pod \"kube-proxy-k2kq9\" (UID: \"daf64c0b-3a05-4376-90e5-1ee47c3b7569\") " pod="kube-system/kube-proxy-k2kq9"
Mar 17 17:53:25.836205 kubelet[2018]: I0317 17:53:25.836156 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-var-run-calico\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836205 kubelet[2018]: I0317 17:53:25.836176 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0027786-a11b-45ae-ada7-03122d7bb3ca-socket-dir\") pod \"csi-node-driver-zszl8\" (UID: \"c0027786-a11b-45ae-ada7-03122d7bb3ca\") " pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:25.836284 kubelet[2018]: I0317 17:53:25.836208 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljgl\" (UniqueName: \"kubernetes.io/projected/daf64c0b-3a05-4376-90e5-1ee47c3b7569-kube-api-access-pljgl\") pod \"kube-proxy-k2kq9\" (UID: \"daf64c0b-3a05-4376-90e5-1ee47c3b7569\") " pod="kube-system/kube-proxy-k2kq9"
Mar 17 17:53:25.836284 kubelet[2018]: I0317 17:53:25.836229 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-node-certs\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836284 kubelet[2018]: I0317 17:53:25.836252 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-var-lib-calico\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836284 kubelet[2018]: I0317 17:53:25.836272 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-cni-log-dir\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836364 kubelet[2018]: I0317 17:53:25.836292 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2bqp\" (UniqueName: \"kubernetes.io/projected/c0027786-a11b-45ae-ada7-03122d7bb3ca-kube-api-access-j2bqp\") pod \"csi-node-driver-zszl8\" (UID: \"c0027786-a11b-45ae-ada7-03122d7bb3ca\") " pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:25.836364 kubelet[2018]: I0317 17:53:25.836313 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/daf64c0b-3a05-4376-90e5-1ee47c3b7569-kube-proxy\") pod \"kube-proxy-k2kq9\" (UID: \"daf64c0b-3a05-4376-90e5-1ee47c3b7569\") " pod="kube-system/kube-proxy-k2kq9"
Mar 17 17:53:25.836364 kubelet[2018]: I0317 17:53:25.836333 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/daf64c0b-3a05-4376-90e5-1ee47c3b7569-lib-modules\") pod \"kube-proxy-k2kq9\" (UID: \"daf64c0b-3a05-4376-90e5-1ee47c3b7569\") " pod="kube-system/kube-proxy-k2kq9"
Mar 17 17:53:25.836364 kubelet[2018]: I0317 17:53:25.836353 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-lib-modules\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836537 kubelet[2018]: I0317 17:53:25.836389 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-xtables-lock\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836537 kubelet[2018]: I0317 17:53:25.836416 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-policysync\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836537 kubelet[2018]: I0317 17:53:25.836455 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-cni-bin-dir\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836537 kubelet[2018]: I0317 17:53:25.836477 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-flexvol-driver-host\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.836537 kubelet[2018]: I0317 17:53:25.836501 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gm42\" (UniqueName: \"kubernetes.io/projected/d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d-kube-api-access-2gm42\") pod \"calico-node-7tpzg\" (UID: \"d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d\") " pod="calico-system/calico-node-7tpzg"
Mar 17 17:53:25.949600 kubelet[2018]: E0317 17:53:25.948066 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:25.949600 kubelet[2018]: W0317 17:53:25.949503 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:25.949600 kubelet[2018]: E0317 17:53:25.949542 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:25.974772 kubelet[2018]: E0317 17:53:25.973023 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:25.974772 kubelet[2018]: W0317 17:53:25.973060 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:25.974772 kubelet[2018]: E0317 17:53:25.973090 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:25.983248 kubelet[2018]: E0317 17:53:25.983218 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:25.983493 kubelet[2018]: W0317 17:53:25.983421 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:25.983493 kubelet[2018]: E0317 17:53:25.983454 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:25.991364 kubelet[2018]: E0317 17:53:25.990712 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:25.991364 kubelet[2018]: W0317 17:53:25.990740 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:25.991364 kubelet[2018]: E0317 17:53:25.990763 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:26.132614 containerd[1493]: time="2025-03-17T17:53:26.132562250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2kq9,Uid:daf64c0b-3a05-4376-90e5-1ee47c3b7569,Namespace:kube-system,Attempt:0,}"
Mar 17 17:53:26.140678 containerd[1493]: time="2025-03-17T17:53:26.140627640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7tpzg,Uid:d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d,Namespace:calico-system,Attempt:0,}"
Mar 17 17:53:26.733164 containerd[1493]: time="2025-03-17T17:53:26.733095006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:53:26.734993 containerd[1493]: time="2025-03-17T17:53:26.734936938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Mar 17 17:53:26.740086 containerd[1493]: time="2025-03-17T17:53:26.739092656Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:53:26.742969 containerd[1493]: time="2025-03-17T17:53:26.742920405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:53:26.744801 containerd[1493]: time="2025-03-17T17:53:26.744750257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:53:26.745978 containerd[1493]: time="2025-03-17T17:53:26.745893849Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 613.228916ms"
Mar 17 17:53:26.746858 containerd[1493]: time="2025-03-17T17:53:26.746666311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:53:26.750543 containerd[1493]: time="2025-03-17T17:53:26.750405378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.50093ms"
Mar 17 17:53:26.797496 kubelet[2018]: E0317 17:53:26.797446 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:26.850084 containerd[1493]: time="2025-03-17T17:53:26.848545408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:53:26.850084 containerd[1493]: time="2025-03-17T17:53:26.848634051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:53:26.850084 containerd[1493]: time="2025-03-17T17:53:26.848645491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:26.850084 containerd[1493]: time="2025-03-17T17:53:26.848764774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:26.855555 containerd[1493]: time="2025-03-17T17:53:26.855316681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:53:26.855555 containerd[1493]: time="2025-03-17T17:53:26.855463325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:53:26.855555 containerd[1493]: time="2025-03-17T17:53:26.855484126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:26.856603 containerd[1493]: time="2025-03-17T17:53:26.856527075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:26.959329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872626107.mount: Deactivated successfully.
Mar 17 17:53:26.972347 systemd[1]: Started cri-containerd-0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e.scope - libcontainer container 0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e.
Mar 17 17:53:26.975115 systemd[1]: Started cri-containerd-26b10c9c9d5428df1f1fab6f47806912e6751ec1cd2f85fdebd9816d21a88667.scope - libcontainer container 26b10c9c9d5428df1f1fab6f47806912e6751ec1cd2f85fdebd9816d21a88667.
Mar 17 17:53:27.011136 containerd[1493]: time="2025-03-17T17:53:27.011011419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2kq9,Uid:daf64c0b-3a05-4376-90e5-1ee47c3b7569,Namespace:kube-system,Attempt:0,} returns sandbox id \"26b10c9c9d5428df1f1fab6f47806912e6751ec1cd2f85fdebd9816d21a88667\""
Mar 17 17:53:27.019580 containerd[1493]: time="2025-03-17T17:53:27.019282047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 17 17:53:27.026875 containerd[1493]: time="2025-03-17T17:53:27.026827815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7tpzg,Uid:d0fe17bc-6e93-4f28-a3f2-7e1c05b4571d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e\""
Mar 17 17:53:27.798049 kubelet[2018]: E0317 17:53:27.798004 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:27.909076 kubelet[2018]: E0317 17:53:27.908177 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:28.051315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491778419.mount: Deactivated successfully.
Mar 17 17:53:28.325778 containerd[1493]: time="2025-03-17T17:53:28.325637723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:28.326576 containerd[1493]: time="2025-03-17T17:53:28.326391064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370121"
Mar 17 17:53:28.327374 containerd[1493]: time="2025-03-17T17:53:28.327276327Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:28.330879 containerd[1493]: time="2025-03-17T17:53:28.330541174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:28.331479 containerd[1493]: time="2025-03-17T17:53:28.331431998Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.31208767s"
Mar 17 17:53:28.331479 containerd[1493]: time="2025-03-17T17:53:28.331476679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\""
Mar 17 17:53:28.333342 containerd[1493]: time="2025-03-17T17:53:28.333291328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 17 17:53:28.335969 containerd[1493]: time="2025-03-17T17:53:28.335594669Z" level=info msg="CreateContainer within sandbox \"26b10c9c9d5428df1f1fab6f47806912e6751ec1cd2f85fdebd9816d21a88667\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:53:28.351141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064610672.mount: Deactivated successfully.
Mar 17 17:53:28.360925 containerd[1493]: time="2025-03-17T17:53:28.360843423Z" level=info msg="CreateContainer within sandbox \"26b10c9c9d5428df1f1fab6f47806912e6751ec1cd2f85fdebd9816d21a88667\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e1f8de15d34f5d61d571c9dbc80def216631a2b62ffc4e2eee51006e9ece7f1\""
Mar 17 17:53:28.362063 containerd[1493]: time="2025-03-17T17:53:28.362015455Z" level=info msg="StartContainer for \"6e1f8de15d34f5d61d571c9dbc80def216631a2b62ffc4e2eee51006e9ece7f1\""
Mar 17 17:53:28.394144 systemd[1]: Started cri-containerd-6e1f8de15d34f5d61d571c9dbc80def216631a2b62ffc4e2eee51006e9ece7f1.scope - libcontainer container 6e1f8de15d34f5d61d571c9dbc80def216631a2b62ffc4e2eee51006e9ece7f1.
Mar 17 17:53:28.431155 containerd[1493]: time="2025-03-17T17:53:28.430990696Z" level=info msg="StartContainer for \"6e1f8de15d34f5d61d571c9dbc80def216631a2b62ffc4e2eee51006e9ece7f1\" returns successfully"
Mar 17 17:53:28.798478 kubelet[2018]: E0317 17:53:28.798403 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:28.944421 kubelet[2018]: E0317 17:53:28.944363 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.944421 kubelet[2018]: W0317 17:53:28.944398 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.944421 kubelet[2018]: E0317 17:53:28.944433 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.945067 kubelet[2018]: E0317 17:53:28.945028 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.945243 kubelet[2018]: W0317 17:53:28.945056 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.946842 kubelet[2018]: E0317 17:53:28.945357 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.946842 kubelet[2018]: E0317 17:53:28.945803 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.946842 kubelet[2018]: W0317 17:53:28.945818 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.946842 kubelet[2018]: E0317 17:53:28.945835 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.946842 kubelet[2018]: E0317 17:53:28.946404 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.946842 kubelet[2018]: W0317 17:53:28.946421 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.946842 kubelet[2018]: E0317 17:53:28.946481 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.946842 kubelet[2018]: E0317 17:53:28.946739 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.946842 kubelet[2018]: W0317 17:53:28.946750 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.946842 kubelet[2018]: E0317 17:53:28.946763 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.947377 kubelet[2018]: E0317 17:53:28.947340 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.947377 kubelet[2018]: W0317 17:53:28.947375 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.947467 kubelet[2018]: E0317 17:53:28.947391 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.947697 kubelet[2018]: E0317 17:53:28.947669 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.947697 kubelet[2018]: W0317 17:53:28.947688 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.947697 kubelet[2018]: E0317 17:53:28.947699 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.948027 kubelet[2018]: E0317 17:53:28.948000 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.948027 kubelet[2018]: W0317 17:53:28.948020 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.948129 kubelet[2018]: E0317 17:53:28.948045 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.948330 kubelet[2018]: E0317 17:53:28.948302 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.948330 kubelet[2018]: W0317 17:53:28.948320 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.948489 kubelet[2018]: E0317 17:53:28.948331 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.948693 kubelet[2018]: E0317 17:53:28.948665 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.948693 kubelet[2018]: W0317 17:53:28.948685 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.948771 kubelet[2018]: E0317 17:53:28.948705 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.948922 kubelet[2018]: E0317 17:53:28.948878 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.948922 kubelet[2018]: W0317 17:53:28.948892 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.948922 kubelet[2018]: E0317 17:53:28.948919 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.949104 kubelet[2018]: E0317 17:53:28.949081 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.949163 kubelet[2018]: W0317 17:53:28.949146 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.949163 kubelet[2018]: E0317 17:53:28.949161 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.950221 kubelet[2018]: E0317 17:53:28.950193 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.950221 kubelet[2018]: W0317 17:53:28.950219 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.950379 kubelet[2018]: E0317 17:53:28.950234 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.950839 kubelet[2018]: E0317 17:53:28.950718 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.950839 kubelet[2018]: W0317 17:53:28.950742 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.950839 kubelet[2018]: E0317 17:53:28.950756 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.951420 kubelet[2018]: E0317 17:53:28.951108 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.951420 kubelet[2018]: W0317 17:53:28.951119 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.951420 kubelet[2018]: E0317 17:53:28.951132 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.951420 kubelet[2018]: E0317 17:53:28.951357 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.951420 kubelet[2018]: W0317 17:53:28.951367 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.951420 kubelet[2018]: E0317 17:53:28.951377 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.951598 kubelet[2018]: E0317 17:53:28.951590 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.951623 kubelet[2018]: W0317 17:53:28.951600 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.951623 kubelet[2018]: E0317 17:53:28.951611 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.952788 kubelet[2018]: E0317 17:53:28.951784 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.952788 kubelet[2018]: W0317 17:53:28.951796 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.952788 kubelet[2018]: E0317 17:53:28.951817 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:28.953290 kubelet[2018]: E0317 17:53:28.953265 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:28.953290 kubelet[2018]: W0317 17:53:28.953285 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:28.953360 kubelet[2018]: E0317 17:53:28.953312 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:28.953636 kubelet[2018]: E0317 17:53:28.953598 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.953636 kubelet[2018]: W0317 17:53:28.953618 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.953636 kubelet[2018]: E0317 17:53:28.953631 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:28.959868 kubelet[2018]: E0317 17:53:28.959664 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.959868 kubelet[2018]: W0317 17:53:28.959684 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.959868 kubelet[2018]: E0317 17:53:28.959699 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:28.960278 kubelet[2018]: E0317 17:53:28.960263 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.960525 kubelet[2018]: W0317 17:53:28.960327 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.960525 kubelet[2018]: E0317 17:53:28.960354 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:28.960761 kubelet[2018]: E0317 17:53:28.960745 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.960829 kubelet[2018]: W0317 17:53:28.960817 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.960892 kubelet[2018]: E0317 17:53:28.960880 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:28.961504 kubelet[2018]: E0317 17:53:28.961477 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.961504 kubelet[2018]: W0317 17:53:28.961497 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.961596 kubelet[2018]: E0317 17:53:28.961517 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:28.961980 kubelet[2018]: E0317 17:53:28.961731 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.961980 kubelet[2018]: W0317 17:53:28.961746 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.961980 kubelet[2018]: E0317 17:53:28.961757 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:28.962281 kubelet[2018]: E0317 17:53:28.962263 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.962281 kubelet[2018]: W0317 17:53:28.962280 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.962833 kubelet[2018]: E0317 17:53:28.962658 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.962833 kubelet[2018]: W0317 17:53:28.962673 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.962833 kubelet[2018]: E0317 17:53:28.962686 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:28.963140 kubelet[2018]: E0317 17:53:28.963126 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.963343 kubelet[2018]: W0317 17:53:28.963201 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.963343 kubelet[2018]: E0317 17:53:28.963218 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:28.963491 kubelet[2018]: E0317 17:53:28.963366 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:28.963604 kubelet[2018]: E0317 17:53:28.963590 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.963670 kubelet[2018]: W0317 17:53:28.963658 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.963739 kubelet[2018]: E0317 17:53:28.963729 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:28.964325 kubelet[2018]: E0317 17:53:28.964306 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.964665 kubelet[2018]: W0317 17:53:28.964402 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.964665 kubelet[2018]: E0317 17:53:28.964447 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:28.964889 kubelet[2018]: E0317 17:53:28.964872 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.964956 kubelet[2018]: W0317 17:53:28.964889 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.964995 kubelet[2018]: E0317 17:53:28.964968 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:28.965275 kubelet[2018]: E0317 17:53:28.965260 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:28.965275 kubelet[2018]: W0317 17:53:28.965274 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:28.965345 kubelet[2018]: E0317 17:53:28.965291 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.799682 kubelet[2018]: E0317 17:53:29.799605 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:29.909184 kubelet[2018]: E0317 17:53:29.908621 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca" Mar 17 17:53:29.962368 kubelet[2018]: E0317 17:53:29.960438 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.962368 kubelet[2018]: W0317 17:53:29.960493 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.962368 kubelet[2018]: E0317 17:53:29.960556 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.962368 kubelet[2018]: E0317 17:53:29.960975 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.962368 kubelet[2018]: W0317 17:53:29.960996 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.962368 kubelet[2018]: E0317 17:53:29.961019 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.962368 kubelet[2018]: E0317 17:53:29.961321 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.962368 kubelet[2018]: W0317 17:53:29.961343 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.962368 kubelet[2018]: E0317 17:53:29.961378 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.962368 kubelet[2018]: E0317 17:53:29.961724 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.963709 kubelet[2018]: W0317 17:53:29.961738 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.963709 kubelet[2018]: E0317 17:53:29.961781 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.963709 kubelet[2018]: E0317 17:53:29.962048 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.963709 kubelet[2018]: W0317 17:53:29.962059 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.963709 kubelet[2018]: E0317 17:53:29.962069 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.963709 kubelet[2018]: E0317 17:53:29.962271 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.963709 kubelet[2018]: W0317 17:53:29.962281 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.963709 kubelet[2018]: E0317 17:53:29.962290 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.963709 kubelet[2018]: E0317 17:53:29.962883 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.963709 kubelet[2018]: W0317 17:53:29.962909 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.964123 kubelet[2018]: E0317 17:53:29.962923 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.964123 kubelet[2018]: E0317 17:53:29.963152 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.964123 kubelet[2018]: W0317 17:53:29.963165 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.964123 kubelet[2018]: E0317 17:53:29.963175 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.964123 kubelet[2018]: E0317 17:53:29.963354 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.964123 kubelet[2018]: W0317 17:53:29.963362 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.964123 kubelet[2018]: E0317 17:53:29.963371 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.964123 kubelet[2018]: E0317 17:53:29.963544 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.964123 kubelet[2018]: W0317 17:53:29.963556 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.964123 kubelet[2018]: E0317 17:53:29.963566 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.964396 kubelet[2018]: E0317 17:53:29.963828 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.964396 kubelet[2018]: W0317 17:53:29.963839 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.964396 kubelet[2018]: E0317 17:53:29.963854 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.964396 kubelet[2018]: E0317 17:53:29.964132 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.964396 kubelet[2018]: W0317 17:53:29.964142 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.964396 kubelet[2018]: E0317 17:53:29.964152 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.964396 kubelet[2018]: E0317 17:53:29.964304 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.964396 kubelet[2018]: W0317 17:53:29.964311 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.964396 kubelet[2018]: E0317 17:53:29.964319 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.965039 kubelet[2018]: E0317 17:53:29.964428 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.965039 kubelet[2018]: W0317 17:53:29.964436 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.965039 kubelet[2018]: E0317 17:53:29.964443 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.965039 kubelet[2018]: E0317 17:53:29.964581 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.965039 kubelet[2018]: W0317 17:53:29.964589 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.965039 kubelet[2018]: E0317 17:53:29.964597 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.965039 kubelet[2018]: E0317 17:53:29.964723 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.965039 kubelet[2018]: W0317 17:53:29.964731 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.965039 kubelet[2018]: E0317 17:53:29.964738 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.965039 kubelet[2018]: E0317 17:53:29.965022 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.965293 kubelet[2018]: W0317 17:53:29.965033 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.965293 kubelet[2018]: E0317 17:53:29.965044 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.965293 kubelet[2018]: E0317 17:53:29.965195 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.965293 kubelet[2018]: W0317 17:53:29.965203 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.965293 kubelet[2018]: E0317 17:53:29.965211 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.965513 kubelet[2018]: E0317 17:53:29.965339 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.965513 kubelet[2018]: W0317 17:53:29.965346 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.965513 kubelet[2018]: E0317 17:53:29.965354 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.965513 kubelet[2018]: E0317 17:53:29.965502 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.965513 kubelet[2018]: W0317 17:53:29.965511 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.965631 kubelet[2018]: E0317 17:53:29.965519 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.966955 kubelet[2018]: E0317 17:53:29.966790 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.966955 kubelet[2018]: W0317 17:53:29.966806 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.966955 kubelet[2018]: E0317 17:53:29.966829 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.967219 kubelet[2018]: E0317 17:53:29.967205 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.967410 kubelet[2018]: W0317 17:53:29.967277 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.967410 kubelet[2018]: E0317 17:53:29.967295 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.968765 kubelet[2018]: E0317 17:53:29.968618 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.968765 kubelet[2018]: W0317 17:53:29.968636 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.969099 kubelet[2018]: E0317 17:53:29.969001 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.969724 kubelet[2018]: E0317 17:53:29.969684 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.969954 kubelet[2018]: W0317 17:53:29.969794 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.970176 kubelet[2018]: E0317 17:53:29.969916 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:29.970542 kubelet[2018]: E0317 17:53:29.970522 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.970766 kubelet[2018]: W0317 17:53:29.970573 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.970766 kubelet[2018]: E0317 17:53:29.970611 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:29.971413 kubelet[2018]: E0317 17:53:29.971305 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:29.971413 kubelet[2018]: W0317 17:53:29.971346 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:29.971413 kubelet[2018]: E0317 17:53:29.971367 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 17 17:53:29.974854 kubelet[2018]: E0317 17:53:29.972809 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:29.974854 kubelet[2018]: W0317 17:53:29.972853 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:29.974854 kubelet[2018]: E0317 17:53:29.972873 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:29.974854 kubelet[2018]: E0317 17:53:29.973212 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:29.974854 kubelet[2018]: W0317 17:53:29.973373 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:29.974854 kubelet[2018]: E0317 17:53:29.973400 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:29.974854 kubelet[2018]: E0317 17:53:29.974080 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:29.974854 kubelet[2018]: W0317 17:53:29.974130 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:29.974854 kubelet[2018]: E0317 17:53:29.974143 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:29.974854 kubelet[2018]: E0317 17:53:29.974386 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:29.975990 kubelet[2018]: W0317 17:53:29.974396 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:29.975990 kubelet[2018]: E0317 17:53:29.974406 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:29.975990 kubelet[2018]: E0317 17:53:29.974679 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:29.975990 kubelet[2018]: W0317 17:53:29.974740 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:29.975990 kubelet[2018]: E0317 17:53:29.974751 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:29.975990 kubelet[2018]: E0317 17:53:29.975811 2018 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:29.975990 kubelet[2018]: W0317 17:53:29.975824 2018 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:29.975990 kubelet[2018]: E0317 17:53:29.975835 2018 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:30.075693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167685724.mount: Deactivated successfully.
Mar 17 17:53:30.167908 containerd[1493]: time="2025-03-17T17:53:30.167819983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:30.168994 containerd[1493]: time="2025-03-17T17:53:30.168934211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6490047"
Mar 17 17:53:30.171269 containerd[1493]: time="2025-03-17T17:53:30.169965597Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:30.173141 containerd[1493]: time="2025-03-17T17:53:30.173093035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:30.174068 containerd[1493]: time="2025-03-17T17:53:30.174019898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.840680489s"
Mar 17 17:53:30.174068 containerd[1493]: time="2025-03-17T17:53:30.174068220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\""
Mar 17 17:53:30.178014 containerd[1493]: time="2025-03-17T17:53:30.177973797Z" level=info msg="CreateContainer within sandbox \"0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 17 17:53:30.205403 containerd[1493]: time="2025-03-17T17:53:30.205337404Z" level=info msg="CreateContainer within sandbox \"0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b\""
Mar 17 17:53:30.206432 containerd[1493]: time="2025-03-17T17:53:30.206388350Z" level=info msg="StartContainer for \"d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b\""
Mar 17 17:53:30.245726 systemd[1]: Started cri-containerd-d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b.scope - libcontainer container d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b.
Mar 17 17:53:30.289389 containerd[1493]: time="2025-03-17T17:53:30.289325029Z" level=info msg="StartContainer for \"d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b\" returns successfully"
Mar 17 17:53:30.305078 systemd[1]: cri-containerd-d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b.scope: Deactivated successfully.
Mar 17 17:53:30.425566 containerd[1493]: time="2025-03-17T17:53:30.425370000Z" level=info msg="shim disconnected" id=d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b namespace=k8s.io
Mar 17 17:53:30.425566 containerd[1493]: time="2025-03-17T17:53:30.425488643Z" level=warning msg="cleaning up after shim disconnected" id=d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b namespace=k8s.io
Mar 17 17:53:30.425566 containerd[1493]: time="2025-03-17T17:53:30.425525044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:53:30.800875 kubelet[2018]: E0317 17:53:30.800692 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:30.937938 containerd[1493]: time="2025-03-17T17:53:30.937855610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 17 17:53:30.959636 kubelet[2018]: I0317 17:53:30.959106 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k2kq9" podStartSLOduration=5.6447099309999995 podStartE2EDuration="6.959077182s" podCreationTimestamp="2025-03-17 17:53:24 +0000 UTC" firstStartedPulling="2025-03-17 17:53:27.01868403 +0000 UTC m=+4.039003580" lastFinishedPulling="2025-03-17 17:53:28.333051241 +0000 UTC m=+5.353370831" observedRunningTime="2025-03-17 17:53:28.950049995 +0000 UTC m=+5.970369545" watchObservedRunningTime="2025-03-17 17:53:30.959077182 +0000 UTC m=+7.979396772"
Mar 17 17:53:31.077739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8cf8c643007118e398bfac581dc2f029ad4d58e6b69979c8b8c2716dd1ebb8b-rootfs.mount: Deactivated successfully.
Mar 17 17:53:31.802050 kubelet[2018]: E0317 17:53:31.801648 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:31.908822 kubelet[2018]: E0317 17:53:31.908247 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:32.802413 kubelet[2018]: E0317 17:53:32.802345 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:33.802957 kubelet[2018]: E0317 17:53:33.802849 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:33.909065 kubelet[2018]: E0317 17:53:33.908604 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:34.803352 kubelet[2018]: E0317 17:53:34.803304 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:35.449129 containerd[1493]: time="2025-03-17T17:53:35.449024577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:35.450647 containerd[1493]: time="2025-03-17T17:53:35.450528409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396"
Mar 17 17:53:35.451856 containerd[1493]: time="2025-03-17T17:53:35.451508390Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:35.454717 containerd[1493]: time="2025-03-17T17:53:35.454646137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:35.455826 containerd[1493]: time="2025-03-17T17:53:35.455779882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 4.51787435s"
Mar 17 17:53:35.455826 containerd[1493]: time="2025-03-17T17:53:35.455823363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\""
Mar 17 17:53:35.459550 containerd[1493]: time="2025-03-17T17:53:35.459354478Z" level=info msg="CreateContainer within sandbox \"0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 17 17:53:35.479182 containerd[1493]: time="2025-03-17T17:53:35.479091821Z" level=info msg="CreateContainer within sandbox \"0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b\""
Mar 17 17:53:35.480073 containerd[1493]: time="2025-03-17T17:53:35.479999801Z" level=info msg="StartContainer for \"22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b\""
Mar 17 17:53:35.512142 systemd[1]: Started cri-containerd-22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b.scope - libcontainer container 22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b.
Mar 17 17:53:35.547331 containerd[1493]: time="2025-03-17T17:53:35.547168280Z" level=info msg="StartContainer for \"22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b\" returns successfully"
Mar 17 17:53:35.804415 kubelet[2018]: E0317 17:53:35.804238 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:35.910239 kubelet[2018]: E0317 17:53:35.910182 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:36.097102 kubelet[2018]: I0317 17:53:36.096768 2018 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 17 17:53:36.098818 systemd[1]: cri-containerd-22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b.scope: Deactivated successfully.
Mar 17 17:53:36.099270 systemd[1]: cri-containerd-22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b.scope: Consumed 527ms CPU time, 171.7M memory peak, 150.3M written to disk.
Mar 17 17:53:36.122785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b-rootfs.mount: Deactivated successfully.
Mar 17 17:53:36.239832 containerd[1493]: time="2025-03-17T17:53:36.239752443Z" level=info msg="shim disconnected" id=22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b namespace=k8s.io
Mar 17 17:53:36.239832 containerd[1493]: time="2025-03-17T17:53:36.239827045Z" level=warning msg="cleaning up after shim disconnected" id=22a173becdbb17bb36f3f4cd5bdac7405ef2aeda88ac96662a1e493120e5778b namespace=k8s.io
Mar 17 17:53:36.240291 containerd[1493]: time="2025-03-17T17:53:36.239845445Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:53:36.805111 kubelet[2018]: E0317 17:53:36.805021 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:36.958713 containerd[1493]: time="2025-03-17T17:53:36.958332566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 17 17:53:37.805424 kubelet[2018]: E0317 17:53:37.805331 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:37.919032 systemd[1]: Created slice kubepods-besteffort-podc0027786_a11b_45ae_ada7_03122d7bb3ca.slice - libcontainer container kubepods-besteffort-podc0027786_a11b_45ae_ada7_03122d7bb3ca.slice.
Mar 17 17:53:37.923992 containerd[1493]: time="2025-03-17T17:53:37.923435977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:0,}"
Mar 17 17:53:38.034941 containerd[1493]: time="2025-03-17T17:53:38.034831118Z" level=error msg="Failed to destroy network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:38.037293 containerd[1493]: time="2025-03-17T17:53:38.037200725Z" level=error msg="encountered an error cleaning up failed sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:38.036796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3-shm.mount: Deactivated successfully.
Mar 17 17:53:38.037471 containerd[1493]: time="2025-03-17T17:53:38.037319887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:38.037966 kubelet[2018]: E0317 17:53:38.037779 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:38.037966 kubelet[2018]: E0317 17:53:38.037882 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:38.038525 kubelet[2018]: E0317 17:53:38.038174 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:38.038525 kubelet[2018]: E0317 17:53:38.038259 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:38.806292 kubelet[2018]: E0317 17:53:38.806222 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:38.965418 kubelet[2018]: I0317 17:53:38.965275 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3"
Mar 17 17:53:38.968051 containerd[1493]: time="2025-03-17T17:53:38.967966519Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\""
Mar 17 17:53:38.968239 containerd[1493]: time="2025-03-17T17:53:38.968200164Z" level=info msg="Ensure that sandbox 460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3 in task-service has been cleanup successfully"
Mar 17 17:53:38.970091 containerd[1493]: time="2025-03-17T17:53:38.970029839Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully"
Mar 17 17:53:38.970091 containerd[1493]: time="2025-03-17T17:53:38.970073240Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully"
Mar 17 17:53:38.972093 containerd[1493]: time="2025-03-17T17:53:38.971225703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:1,}"
Mar 17 17:53:38.971397 systemd[1]: run-netns-cni\x2dd43b09f2\x2d0a08\x2d98e4\x2d2828\x2df84be9f23203.mount: Deactivated successfully.
Mar 17 17:53:39.053892 containerd[1493]: time="2025-03-17T17:53:39.053831202Z" level=error msg="Failed to destroy network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:39.054417 containerd[1493]: time="2025-03-17T17:53:39.054247650Z" level=error msg="encountered an error cleaning up failed sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:39.054417 containerd[1493]: time="2025-03-17T17:53:39.054311851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:39.055891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3-shm.mount: Deactivated successfully.
Mar 17 17:53:39.057060 kubelet[2018]: E0317 17:53:39.056161 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:39.057060 kubelet[2018]: E0317 17:53:39.056221 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:39.057060 kubelet[2018]: E0317 17:53:39.056244 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:39.057220 kubelet[2018]: E0317 17:53:39.056287 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:39.807124 kubelet[2018]: E0317 17:53:39.807006 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:39.969981 kubelet[2018]: I0317 17:53:39.969256 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3"
Mar 17 17:53:39.972356 containerd[1493]: time="2025-03-17T17:53:39.970365688Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\""
Mar 17 17:53:39.972356 containerd[1493]: time="2025-03-17T17:53:39.970564291Z" level=info msg="Ensure that sandbox 99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3 in task-service has been cleanup successfully"
Mar 17 17:53:39.972563 systemd[1]: run-netns-cni\x2d97a955b6\x2d37ca\x2dd899\x2d8967\x2db7205dfd57c3.mount: Deactivated successfully.
Mar 17 17:53:39.973797 containerd[1493]: time="2025-03-17T17:53:39.973090419Z" level=info msg="TearDown network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" successfully"
Mar 17 17:53:39.973797 containerd[1493]: time="2025-03-17T17:53:39.973122980Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" returns successfully"
Mar 17 17:53:39.973976 containerd[1493]: time="2025-03-17T17:53:39.973942515Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\""
Mar 17 17:53:39.974412 containerd[1493]: time="2025-03-17T17:53:39.974077118Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully"
Mar 17 17:53:39.974412 containerd[1493]: time="2025-03-17T17:53:39.974095678Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully"
Mar 17 17:53:39.975441 containerd[1493]: time="2025-03-17T17:53:39.974743170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:2,}"
Mar 17 17:53:40.066233 containerd[1493]: time="2025-03-17T17:53:40.065978577Z" level=error msg="Failed to destroy network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:40.068883 containerd[1493]: time="2025-03-17T17:53:40.066911314Z" level=error msg="encountered an error cleaning up failed sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:40.068883 containerd[1493]: time="2025-03-17T17:53:40.067012516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:40.069752 kubelet[2018]: E0317 17:53:40.069341 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:40.069752 kubelet[2018]: E0317 17:53:40.069409 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:40.069752 kubelet[2018]: E0317 17:53:40.069430 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:40.069966 kubelet[2018]: E0317 17:53:40.069473 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca"
Mar 17 17:53:40.069798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b-shm.mount: Deactivated successfully.
Mar 17 17:53:40.808025 kubelet[2018]: E0317 17:53:40.807957 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:40.975099 kubelet[2018]: I0317 17:53:40.974658 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b"
Mar 17 17:53:40.978257 containerd[1493]: time="2025-03-17T17:53:40.976077370Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\""
Mar 17 17:53:40.978257 containerd[1493]: time="2025-03-17T17:53:40.976272974Z" level=info msg="Ensure that sandbox 75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b in task-service has been cleanup successfully"
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.978877822Z" level=info msg="TearDown network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" successfully"
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.978915742Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" returns successfully"
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.979231068Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\""
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.979319790Z" level=info msg="TearDown network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" successfully"
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.979329150Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" returns successfully"
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.979540234Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\""
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.979610595Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully"
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.979620075Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully"
Mar 17 17:53:40.980488 containerd[1493]: time="2025-03-17T17:53:40.980036283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:3,}"
Mar 17 17:53:40.982159 systemd[1]: run-netns-cni\x2d18ff349d\x2d3c8a\x2d13c2\x2db10b\x2de8b6118ad1e0.mount: Deactivated successfully.
Mar 17 17:53:41.069015 containerd[1493]: time="2025-03-17T17:53:41.068739350Z" level=error msg="Failed to destroy network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:41.072815 containerd[1493]: time="2025-03-17T17:53:41.070447461Z" level=error msg="encountered an error cleaning up failed sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:41.072815 containerd[1493]: time="2025-03-17T17:53:41.070606183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:41.073028 kubelet[2018]: E0317 17:53:41.072238 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:41.073028 kubelet[2018]: E0317 17:53:41.072303 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:41.073028 kubelet[2018]: E0317 17:53:41.072353 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8"
Mar 17 17:53:41.071384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab-shm.mount: Deactivated successfully.
Mar 17 17:53:41.073349 kubelet[2018]: E0317 17:53:41.072450 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca" Mar 17 17:53:41.808767 kubelet[2018]: E0317 17:53:41.808699 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:41.961553 systemd[1]: Created slice kubepods-besteffort-pod51665948_acbb_4da3_8eba_2345b7d6dd61.slice - libcontainer container kubepods-besteffort-pod51665948_acbb_4da3_8eba_2345b7d6dd61.slice. 
Mar 17 17:53:41.979280 kubelet[2018]: I0317 17:53:41.979233 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab" Mar 17 17:53:41.980954 containerd[1493]: time="2025-03-17T17:53:41.980271135Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\"" Mar 17 17:53:41.980954 containerd[1493]: time="2025-03-17T17:53:41.980708143Z" level=info msg="Ensure that sandbox 7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab in task-service has been cleanup successfully" Mar 17 17:53:41.983172 containerd[1493]: time="2025-03-17T17:53:41.983107106Z" level=info msg="TearDown network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" successfully" Mar 17 17:53:41.983295 containerd[1493]: time="2025-03-17T17:53:41.983179187Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" returns successfully" Mar 17 17:53:41.983772 containerd[1493]: time="2025-03-17T17:53:41.983717476Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\"" Mar 17 17:53:41.985308 containerd[1493]: time="2025-03-17T17:53:41.985266584Z" level=info msg="TearDown network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" successfully" Mar 17 17:53:41.985308 containerd[1493]: time="2025-03-17T17:53:41.985296944Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" returns successfully" Mar 17 17:53:41.985761 containerd[1493]: time="2025-03-17T17:53:41.985712312Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\"" Mar 17 17:53:41.985812 containerd[1493]: time="2025-03-17T17:53:41.985803873Z" level=info msg="TearDown network for sandbox 
\"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" successfully" Mar 17 17:53:41.985845 containerd[1493]: time="2025-03-17T17:53:41.985813994Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" returns successfully" Mar 17 17:53:41.987720 containerd[1493]: time="2025-03-17T17:53:41.987449223Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\"" Mar 17 17:53:41.987661 systemd[1]: run-netns-cni\x2dfb5e26a1\x2d0b43\x2dc354\x2d4313\x2d80240b8e5f29.mount: Deactivated successfully. Mar 17 17:53:41.989465 containerd[1493]: time="2025-03-17T17:53:41.988792286Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully" Mar 17 17:53:41.989465 containerd[1493]: time="2025-03-17T17:53:41.988828767Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully" Mar 17 17:53:41.990290 containerd[1493]: time="2025-03-17T17:53:41.989930347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:4,}" Mar 17 17:53:42.050814 kubelet[2018]: I0317 17:53:42.050744 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjvl\" (UniqueName: \"kubernetes.io/projected/51665948-acbb-4da3-8eba-2345b7d6dd61-kube-api-access-dpjvl\") pod \"nginx-deployment-7fcdb87857-lvkx4\" (UID: \"51665948-acbb-4da3-8eba-2345b7d6dd61\") " pod="default/nginx-deployment-7fcdb87857-lvkx4" Mar 17 17:53:42.096237 containerd[1493]: time="2025-03-17T17:53:42.095819935Z" level=error msg="Failed to destroy network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.098479 containerd[1493]: time="2025-03-17T17:53:42.096975115Z" level=error msg="encountered an error cleaning up failed sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.098479 containerd[1493]: time="2025-03-17T17:53:42.097097237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.098722 kubelet[2018]: E0317 17:53:42.097370 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.098722 kubelet[2018]: E0317 17:53:42.097436 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8" Mar 17 
17:53:42.098722 kubelet[2018]: E0317 17:53:42.097461 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8" Mar 17 17:53:42.098906 kubelet[2018]: E0317 17:53:42.097528 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca" Mar 17 17:53:42.100686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9-shm.mount: Deactivated successfully. 
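Every failed attempt above (Attempt:3, Attempt:4) dies on the same stat: the Calico CNI plugin requires `/var/lib/calico/nodename`, a file the calico/node container writes once it is running with `/var/lib/calico` mounted. A minimal sketch of that precondition check, using a temp directory as a stand-in for `/var/lib/calico` (the `calico_ready` helper and the `worker-1` node name are hypothetical, for illustration only):

```shell
# Stand-in for /var/lib/calico; the real plugin stats the host path.
dir=$(mktemp -d)
file="$dir/nodename"

# Hypothetical helper mirroring the plugin's stat on the nodename file.
calico_ready() {
  if [ -f "$file" ]; then echo present; else echo missing; fi
}

before=$(calico_ready)        # calico-node not yet running: stat fails, as in the log
echo "worker-1" > "$file"     # calico-node startup writes its node name here
after=$(calico_ready)         # a sandbox retry would now pass the stat check

echo "nodename check: $before -> $after"
rm -rf "$dir"
```

While the file is absent, every CNI add and delete fails exactly as logged, so the kubelet keeps tearing down and recreating the sandbox with an incremented attempt counter.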
Mar 17 17:53:42.267068 containerd[1493]: time="2025-03-17T17:53:42.267017761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lvkx4,Uid:51665948-acbb-4da3-8eba-2345b7d6dd61,Namespace:default,Attempt:0,}" Mar 17 17:53:42.350982 containerd[1493]: time="2025-03-17T17:53:42.350692281Z" level=error msg="Failed to destroy network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.352525 containerd[1493]: time="2025-03-17T17:53:42.352456191Z" level=error msg="encountered an error cleaning up failed sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.352525 containerd[1493]: time="2025-03-17T17:53:42.352543833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lvkx4,Uid:51665948-acbb-4da3-8eba-2345b7d6dd61,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.353178 kubelet[2018]: E0317 17:53:42.352934 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:42.353178 kubelet[2018]: E0317 17:53:42.352993 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lvkx4" Mar 17 17:53:42.353178 kubelet[2018]: E0317 17:53:42.353012 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lvkx4" Mar 17 17:53:42.353421 kubelet[2018]: E0317 17:53:42.353059 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-lvkx4_default(51665948-acbb-4da3-8eba-2345b7d6dd61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-lvkx4_default(51665948-acbb-4da3-8eba-2345b7d6dd61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-lvkx4" podUID="51665948-acbb-4da3-8eba-2345b7d6dd61" Mar 17 17:53:42.809515 kubelet[2018]: E0317 17:53:42.809368 2018 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:42.991146 kubelet[2018]: I0317 17:53:42.991111 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9" Mar 17 17:53:42.991879 containerd[1493]: time="2025-03-17T17:53:42.991687512Z" level=info msg="StopPodSandbox for \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\"" Mar 17 17:53:42.992777 containerd[1493]: time="2025-03-17T17:53:42.992598007Z" level=info msg="Ensure that sandbox 13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9 in task-service has been cleanup successfully" Mar 17 17:53:42.993175 containerd[1493]: time="2025-03-17T17:53:42.993064055Z" level=info msg="TearDown network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" successfully" Mar 17 17:53:42.993175 containerd[1493]: time="2025-03-17T17:53:42.993088536Z" level=info msg="StopPodSandbox for \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" returns successfully" Mar 17 17:53:42.995272 containerd[1493]: time="2025-03-17T17:53:42.993915110Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\"" Mar 17 17:53:42.995272 containerd[1493]: time="2025-03-17T17:53:42.994008392Z" level=info msg="TearDown network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" successfully" Mar 17 17:53:42.995272 containerd[1493]: time="2025-03-17T17:53:42.994018952Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" returns successfully" Mar 17 17:53:42.995505 systemd[1]: run-netns-cni\x2dc60c4836\x2d7b65\x2d9742\x2d46e4\x2de00984210d0e.mount: Deactivated successfully. 
Mar 17 17:53:42.997650 containerd[1493]: time="2025-03-17T17:53:42.997012843Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\"" Mar 17 17:53:42.997650 containerd[1493]: time="2025-03-17T17:53:42.997113765Z" level=info msg="TearDown network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" successfully" Mar 17 17:53:42.997650 containerd[1493]: time="2025-03-17T17:53:42.997126205Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" returns successfully" Mar 17 17:53:42.997650 containerd[1493]: time="2025-03-17T17:53:42.997533772Z" level=info msg="StopPodSandbox for \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\"" Mar 17 17:53:42.997861 kubelet[2018]: I0317 17:53:42.997069 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e" Mar 17 17:53:42.997940 containerd[1493]: time="2025-03-17T17:53:42.997799857Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\"" Mar 17 17:53:42.997940 containerd[1493]: time="2025-03-17T17:53:42.997878098Z" level=info msg="TearDown network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" successfully" Mar 17 17:53:42.998686 containerd[1493]: time="2025-03-17T17:53:42.997889338Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" returns successfully" Mar 17 17:53:42.998686 containerd[1493]: time="2025-03-17T17:53:42.998373067Z" level=info msg="Ensure that sandbox 96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e in task-service has been cleanup successfully" Mar 17 17:53:42.999261 containerd[1493]: time="2025-03-17T17:53:42.999098759Z" level=info msg="StopPodSandbox for 
\"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\"" Mar 17 17:53:43.002927 containerd[1493]: time="2025-03-17T17:53:43.000675026Z" level=info msg="TearDown network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" successfully" Mar 17 17:53:43.002927 containerd[1493]: time="2025-03-17T17:53:43.001398639Z" level=info msg="StopPodSandbox for \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" returns successfully" Mar 17 17:53:43.002927 containerd[1493]: time="2025-03-17T17:53:43.001159835Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully" Mar 17 17:53:43.002927 containerd[1493]: time="2025-03-17T17:53:43.001474240Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully" Mar 17 17:53:43.002927 containerd[1493]: time="2025-03-17T17:53:43.002649899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lvkx4,Uid:51665948-acbb-4da3-8eba-2345b7d6dd61,Namespace:default,Attempt:1,}" Mar 17 17:53:43.001631 systemd[1]: run-netns-cni\x2da2b70f52\x2d3e33\x2d63a7\x2d1f96\x2dcbb0f7276457.mount: Deactivated successfully. 
Mar 17 17:53:43.003971 containerd[1493]: time="2025-03-17T17:53:43.003454233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:5,}" Mar 17 17:53:43.147165 containerd[1493]: time="2025-03-17T17:53:43.147058268Z" level=error msg="Failed to destroy network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.148934 containerd[1493]: time="2025-03-17T17:53:43.148625214Z" level=error msg="encountered an error cleaning up failed sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.148934 containerd[1493]: time="2025-03-17T17:53:43.148741856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lvkx4,Uid:51665948-acbb-4da3-8eba-2345b7d6dd61,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.149298 kubelet[2018]: E0317 17:53:43.149136 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.149298 kubelet[2018]: E0317 17:53:43.149222 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lvkx4" Mar 17 17:53:43.149298 kubelet[2018]: E0317 17:53:43.149249 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-lvkx4" Mar 17 17:53:43.149646 kubelet[2018]: E0317 17:53:43.149305 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-lvkx4_default(51665948-acbb-4da3-8eba-2345b7d6dd61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-lvkx4_default(51665948-acbb-4da3-8eba-2345b7d6dd61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-lvkx4" podUID="51665948-acbb-4da3-8eba-2345b7d6dd61" Mar 17 17:53:43.161937 containerd[1493]: time="2025-03-17T17:53:43.161098382Z" level=error msg="Failed to destroy network for 
sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.162758 containerd[1493]: time="2025-03-17T17:53:43.162591207Z" level=error msg="encountered an error cleaning up failed sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.163725 containerd[1493]: time="2025-03-17T17:53:43.163585384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.164351 kubelet[2018]: E0317 17:53:43.164300 2018 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:43.164462 kubelet[2018]: E0317 17:53:43.164379 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8" Mar 17 17:53:43.164462 kubelet[2018]: E0317 17:53:43.164404 2018 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zszl8" Mar 17 17:53:43.164526 kubelet[2018]: E0317 17:53:43.164449 2018 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zszl8_calico-system(c0027786-a11b-45ae-ada7-03122d7bb3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zszl8" podUID="c0027786-a11b-45ae-ada7-03122d7bb3ca" Mar 17 17:53:43.615224 containerd[1493]: time="2025-03-17T17:53:43.613491127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:43.615224 containerd[1493]: time="2025-03-17T17:53:43.615074754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 17 17:53:43.615965 containerd[1493]: time="2025-03-17T17:53:43.615761645Z" level=info msg="ImageCreate event 
name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:43.623727 containerd[1493]: time="2025-03-17T17:53:43.623447174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:43.624703 containerd[1493]: time="2025-03-17T17:53:43.624324548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 6.665940621s" Mar 17 17:53:43.624833 containerd[1493]: time="2025-03-17T17:53:43.624706075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 17 17:53:43.637983 containerd[1493]: time="2025-03-17T17:53:43.637388046Z" level=info msg="CreateContainer within sandbox \"0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:53:43.666951 containerd[1493]: time="2025-03-17T17:53:43.666866978Z" level=info msg="CreateContainer within sandbox \"0a7f04db3200b37993e263207de23e717642d6c5c1f9936664e736b67665176e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"af4dd1eaf3f470147e28b444af59195184c6a81e97b89b17e280523c9674e254\"" Mar 17 17:53:43.668107 containerd[1493]: time="2025-03-17T17:53:43.668051957Z" level=info msg="StartContainer for \"af4dd1eaf3f470147e28b444af59195184c6a81e97b89b17e280523c9674e254\"" Mar 17 17:53:43.703365 systemd[1]: Started 
cri-containerd-af4dd1eaf3f470147e28b444af59195184c6a81e97b89b17e280523c9674e254.scope - libcontainer container af4dd1eaf3f470147e28b444af59195184c6a81e97b89b17e280523c9674e254. Mar 17 17:53:43.745809 containerd[1493]: time="2025-03-17T17:53:43.745432488Z" level=info msg="StartContainer for \"af4dd1eaf3f470147e28b444af59195184c6a81e97b89b17e280523c9674e254\" returns successfully" Mar 17 17:53:43.792467 kubelet[2018]: E0317 17:53:43.792416 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:43.810553 kubelet[2018]: E0317 17:53:43.810426 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:43.891933 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:53:43.892053 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:53:43.992161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8-shm.mount: Deactivated successfully. Mar 17 17:53:43.992537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630-shm.mount: Deactivated successfully. Mar 17 17:53:43.992847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719187477.mount: Deactivated successfully. 
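With the calico/node image pulled and its container started above, the plugin's missing file should appear shortly afterward, at which point the kubelet's backed-off sandbox retries start succeeding. A sketch of waiting for that transition, simulating a late-starting calico-node with a background write into a temp directory (paths and the `worker-1` name are illustrative assumptions, not taken from this log):

```shell
dir=$(mktemp -d)                                   # stand-in for /var/lib/calico
( sleep 1; echo "worker-1" > "$dir/nodename" ) &   # simulated late calico-node write

# Poll for the nodename file, roughly how an operator might wait for readiness.
polls=0
while [ ! -f "$dir/nodename" ] && [ "$polls" -lt 30 ]; do
  polls=$((polls + 1))
  sleep 1
done
wait                                               # let the background write finish

echo "nodename appeared after $polls poll(s): $(cat "$dir/nodename")"
```

Once the real file exists, the next RunPodSandbox attempt for each pending pod (csi-node-driver-zszl8, nginx-deployment-7fcdb87857-lvkx4) can complete network setup instead of failing the stat.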
Mar 17 17:53:44.011465 kubelet[2018]: I0317 17:53:44.011407 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8"
Mar 17 17:53:44.015217 containerd[1493]: time="2025-03-17T17:53:44.015114419Z" level=info msg="StopPodSandbox for \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\""
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.015399103Z" level=info msg="Ensure that sandbox 0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8 in task-service has been cleanup successfully"
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.015877031Z" level=info msg="TearDown network for sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\" successfully"
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.015984753Z" level=info msg="StopPodSandbox for \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\" returns successfully"
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.016919728Z" level=info msg="StopPodSandbox for \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\""
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.017089251Z" level=info msg="TearDown network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" successfully"
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.017101931Z" level=info msg="StopPodSandbox for \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" returns successfully"
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.017723021Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\""
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.018582515Z" level=info msg="TearDown network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" successfully"
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.019333007Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" returns successfully"
Mar 17 17:53:44.020706 containerd[1493]: time="2025-03-17T17:53:44.019880336Z" level=info msg="StopPodSandbox for \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\""
Mar 17 17:53:44.020442 systemd[1]: run-netns-cni\x2dbe197e3c\x2dd570\x2d7787\x2d760d\x2d1661259f27be.mount: Deactivated successfully.
Mar 17 17:53:44.021334 kubelet[2018]: I0317 17:53:44.018773 2018 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630"
Mar 17 17:53:44.021383 containerd[1493]: time="2025-03-17T17:53:44.021142436Z" level=info msg="Ensure that sandbox 7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630 in task-service has been cleanup successfully"
Mar 17 17:53:44.022023 containerd[1493]: time="2025-03-17T17:53:44.021636604Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\""
Mar 17 17:53:44.022023 containerd[1493]: time="2025-03-17T17:53:44.021776446Z" level=info msg="TearDown network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" successfully"
Mar 17 17:53:44.022023 containerd[1493]: time="2025-03-17T17:53:44.021789407Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" returns successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.022521218Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\""
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.022620780Z" level=info msg="TearDown network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.022631940Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" returns successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.022997746Z" level=info msg="TearDown network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\" successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.023015146Z" level=info msg="StopPodSandbox for \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\" returns successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.023601116Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\""
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.023887201Z" level=info msg="StopPodSandbox for \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\""
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.023966642Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.023981682Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.024142405Z" level=info msg="TearDown network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.024156485Z" level=info msg="StopPodSandbox for \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" returns successfully"
Mar 17 17:53:44.024959 containerd[1493]: time="2025-03-17T17:53:44.024808295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:6,}"
Mar 17 17:53:44.027272 containerd[1493]: time="2025-03-17T17:53:44.025815432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lvkx4,Uid:51665948-acbb-4da3-8eba-2345b7d6dd61,Namespace:default,Attempt:2,}"
Mar 17 17:53:44.029291 systemd[1]: run-netns-cni\x2d0dd550f4\x2da546\x2d03b8\x2d7c40\x2d38ac80f77f52.mount: Deactivated successfully.
Mar 17 17:53:44.040523 kubelet[2018]: I0317 17:53:44.036588 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7tpzg" podStartSLOduration=3.438633168 podStartE2EDuration="20.036565926s" podCreationTimestamp="2025-03-17 17:53:24 +0000 UTC" firstStartedPulling="2025-03-17 17:53:27.028857231 +0000 UTC m=+4.049176781" lastFinishedPulling="2025-03-17 17:53:43.626789989 +0000 UTC m=+20.647109539" observedRunningTime="2025-03-17 17:53:44.036465524 +0000 UTC m=+21.056785114" watchObservedRunningTime="2025-03-17 17:53:44.036565926 +0000 UTC m=+21.056885476"
Mar 17 17:53:44.402052 systemd-networkd[1378]: calid0ccc73de58: Link UP
Mar 17 17:53:44.403994 systemd-networkd[1378]: calid0ccc73de58: Gained carrier
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.130 [INFO][2866] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.166 [INFO][2866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0 nginx-deployment-7fcdb87857- default 51665948-acbb-4da3-8eba-2345b7d6dd61 1526 0 2025-03-17 17:53:41 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-7fcdb87857-lvkx4 eth0 default [] [] [kns.default ksa.default.default] calid0ccc73de58 [] []}} ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.167 [INFO][2866] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.236 [INFO][2885] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" HandleID="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.256 [INFO][2885] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" HandleID="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003377d0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-7fcdb87857-lvkx4", "timestamp":"2025-03-17 17:53:44.236912124 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.256 [INFO][2885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.256 [INFO][2885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.257 [INFO][2885] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4'
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.260 [INFO][2885] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.270 [INFO][2885] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.279 [INFO][2885] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.282 [INFO][2885] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.286 [INFO][2885] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.291 [INFO][2885] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.294 [INFO][2885] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.294 [INFO][2885] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="10.0.0.4" subnet=192.168.99.192/26
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.304 [INFO][2885] ipam/ipam_block_reader_writer.go 228: The block already exists, getting it from data store host="10.0.0.4" subnet=192.168.99.192/26
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.308 [INFO][2885] ipam/ipam_block_reader_writer.go 244: Block is already claimed by this host, confirm the affinity host="10.0.0.4" subnet=192.168.99.192/26
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.308 [INFO][2885] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="10.0.0.4" subnet=192.168.99.192/26
Mar 17 17:53:44.421503 containerd[1493]: 2025-03-17 17:53:44.316 [ERROR][2885] ipam/customresource.go 183: Error updating resource Key=BlockAffinity(10.0.0.4-192-168-99-192-26) Name="10.0.0.4-192-168-99-192-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-192-168-99-192-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1548", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"10.0.0.4", CIDR:"192.168.99.192/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "10.0.0.4-192-168-99-192-26": the object has been modified; please apply your changes to the latest version and try again
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.320 [INFO][2885] ipam/ipam_block_reader_writer.go 284: Affinity is already confirmed host="10.0.0.4" subnet=192.168.99.192/26
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.320 [INFO][2885] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.324 [INFO][2885] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.330 [INFO][2885] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.335 [ERROR][2885] ipam/customresource.go 183: Error updating resource Key=IPAMBlock(192-168-99-192-26) Name="192-168-99-192-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-99-192-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1549", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.99.192/26", Affinity:(*string)(0x400049a200), Allocations:[]*int{(*int)(0x400000ef00), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0x40003377d0), AttrSecondary:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-7fcdb87857-lvkx4", "timestamp":"2025-03-17 17:53:44.236912124 +0000 UTC"}}}, SequenceNumber:0x182da89d459aa0cd, SequenceNumberForAllocation:map[string]uint64{"0":0x182da89d459aa0cc}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-99-192-26": the object has been modified; please apply your changes to the latest version and try again
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.335 [INFO][2885] ipam/ipam.go 1207: Failed to update block block=192.168.99.192/26 error=update conflict: IPAMBlock(192-168-99-192-26) handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.367 [INFO][2885] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.371 [INFO][2885] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.378 [INFO][2885] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.385 [INFO][2885] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.385 [INFO][2885] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" host="10.0.0.4"
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.385 [INFO][2885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 17:53:44.423230 containerd[1493]: 2025-03-17 17:53:44.385 [INFO][2885] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" HandleID="k8s-pod-network.2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0"
Mar 17 17:53:44.424404 containerd[1493]: 2025-03-17 17:53:44.388 [INFO][2866] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"51665948-acbb-4da3-8eba-2345b7d6dd61", ResourceVersion:"1526", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-lvkx4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0ccc73de58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:53:44.424404 containerd[1493]: 2025-03-17 17:53:44.388 [INFO][2866] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0"
Mar 17 17:53:44.424404 containerd[1493]: 2025-03-17 17:53:44.389 [INFO][2866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0ccc73de58 ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0"
Mar 17 17:53:44.424404 containerd[1493]: 2025-03-17 17:53:44.407 [INFO][2866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0"
Mar 17 17:53:44.424404 containerd[1493]: 2025-03-17 17:53:44.407 [INFO][2866] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"51665948-acbb-4da3-8eba-2345b7d6dd61", ResourceVersion:"1526", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b", Pod:"nginx-deployment-7fcdb87857-lvkx4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0ccc73de58", MAC:"72:6b:cc:32:c4:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:53:44.424404 containerd[1493]: 2025-03-17 17:53:44.418 [INFO][2866] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b" Namespace="default" Pod="nginx-deployment-7fcdb87857-lvkx4" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--lvkx4-eth0"
Mar 17 17:53:44.446892 containerd[1493]: time="2025-03-17T17:53:44.446560553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:53:44.446892 containerd[1493]: time="2025-03-17T17:53:44.446634434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:53:44.446892 containerd[1493]: time="2025-03-17T17:53:44.446651235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:44.446892 containerd[1493]: time="2025-03-17T17:53:44.446817237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:44.456718 systemd-networkd[1378]: cali76f75731573: Link UP
Mar 17 17:53:44.459588 systemd-networkd[1378]: cali76f75731573: Gained carrier
Mar 17 17:53:44.476790 systemd[1]: Started cri-containerd-2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b.scope - libcontainer container 2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b.
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.124 [INFO][2853] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.168 [INFO][2853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--zszl8-eth0 csi-node-driver- calico-system c0027786-a11b-45ae-ada7-03122d7bb3ca 1446 0 2025-03-17 17:53:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-zszl8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali76f75731573 [] []}} ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.169 [INFO][2853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-eth0"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.236 [INFO][2890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" HandleID="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Workload="10.0.0.4-k8s-csi--node--driver--zszl8-eth0"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.262 [INFO][2890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" HandleID="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Workload="10.0.0.4-k8s-csi--node--driver--zszl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028caf0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-zszl8", "timestamp":"2025-03-17 17:53:44.236798322 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.262 [INFO][2890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.385 [INFO][2890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.385 [INFO][2890] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4'
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.390 [INFO][2890] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.397 [INFO][2890] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.410 [INFO][2890] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.415 [INFO][2890] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.425 [INFO][2890] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.425 [INFO][2890] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.431 [INFO][2890] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.438 [INFO][2890] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.450 [INFO][2890] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.450 [INFO][2890] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" host="10.0.0.4"
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.450 [INFO][2890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 17:53:44.477512 containerd[1493]: 2025-03-17 17:53:44.450 [INFO][2890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" HandleID="k8s-pod-network.f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Workload="10.0.0.4-k8s-csi--node--driver--zszl8-eth0"
Mar 17 17:53:44.478159 containerd[1493]: 2025-03-17 17:53:44.453 [INFO][2853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--zszl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0027786-a11b-45ae-ada7-03122d7bb3ca", ResourceVersion:"1446", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-zszl8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76f75731573", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:53:44.478159 containerd[1493]: 2025-03-17 17:53:44.454 [INFO][2853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-eth0"
Mar 17 17:53:44.478159 containerd[1493]: 2025-03-17 17:53:44.454 [INFO][2853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76f75731573 ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-eth0"
Mar 17 17:53:44.478159 containerd[1493]: 2025-03-17 17:53:44.456 [INFO][2853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-eth0"
Mar 17 17:53:44.478159 containerd[1493]: 2025-03-17 17:53:44.458 [INFO][2853] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--zszl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0027786-a11b-45ae-ada7-03122d7bb3ca", ResourceVersion:"1446", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533", Pod:"csi-node-driver-zszl8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76f75731573", MAC:"5e:df:99:d0:b9:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:53:44.478159 containerd[1493]: 2025-03-17 17:53:44.470 [INFO][2853] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533" Namespace="calico-system" Pod="csi-node-driver-zszl8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--zszl8-eth0"
Mar 17 17:53:44.507322 containerd[1493]: time="2025-03-17T17:53:44.506942849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:53:44.507322 containerd[1493]: time="2025-03-17T17:53:44.507018611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:53:44.507322 containerd[1493]: time="2025-03-17T17:53:44.507036531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:44.509747 containerd[1493]: time="2025-03-17T17:53:44.508533995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:44.519562 containerd[1493]: time="2025-03-17T17:53:44.519521613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lvkx4,Uid:51665948-acbb-4da3-8eba-2345b7d6dd61,Namespace:default,Attempt:2,} returns sandbox id \"2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b\""
Mar 17 17:53:44.523264 containerd[1493]: time="2025-03-17T17:53:44.523050710Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Mar 17 17:53:44.535165 systemd[1]: Started cri-containerd-f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533.scope - libcontainer container f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533.
Mar 17 17:53:44.564120 containerd[1493]: time="2025-03-17T17:53:44.564041372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zszl8,Uid:c0027786-a11b-45ae-ada7-03122d7bb3ca,Namespace:calico-system,Attempt:6,} returns sandbox id \"f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533\""
Mar 17 17:53:44.811393 kubelet[2018]: E0317 17:53:44.811161 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:45.678982 kernel: bpftool[3160]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 17 17:53:45.812146 kubelet[2018]: E0317 17:53:45.812086 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:53:45.874715 systemd-networkd[1378]: vxlan.calico: Link UP
Mar 17 17:53:45.877847 systemd-networkd[1378]: vxlan.calico: Gained carrier
Mar 17 17:53:45.881049 systemd-networkd[1378]: calid0ccc73de58: Gained IPv6LL
Mar 17 17:53:46.098853
systemd[1]: run-containerd-runc-k8s.io-af4dd1eaf3f470147e28b444af59195184c6a81e97b89b17e280523c9674e254-runc.Yvd5a1.mount: Deactivated successfully. Mar 17 17:53:46.138868 systemd-networkd[1378]: cali76f75731573: Gained IPv6LL Mar 17 17:53:46.812580 kubelet[2018]: E0317 17:53:46.812389 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:47.167217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301361783.mount: Deactivated successfully. Mar 17 17:53:47.481393 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Mar 17 17:53:47.813019 kubelet[2018]: E0317 17:53:47.812809 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:48.098078 containerd[1493]: time="2025-03-17T17:53:48.097929873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:48.099603 containerd[1493]: time="2025-03-17T17:53:48.099540856Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69703867" Mar 17 17:53:48.101346 containerd[1493]: time="2025-03-17T17:53:48.100494269Z" level=info msg="ImageCreate event name:\"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:48.104797 containerd[1493]: time="2025-03-17T17:53:48.104731850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:48.106674 containerd[1493]: time="2025-03-17T17:53:48.106257512Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo 
digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 3.582988718s" Mar 17 17:53:48.106674 containerd[1493]: time="2025-03-17T17:53:48.106301912Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:53:48.109666 containerd[1493]: time="2025-03-17T17:53:48.109377036Z" level=info msg="CreateContainer within sandbox \"2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:53:48.110232 containerd[1493]: time="2025-03-17T17:53:48.110009445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:53:48.130329 containerd[1493]: time="2025-03-17T17:53:48.130277014Z" level=info msg="CreateContainer within sandbox \"2f0b6a4574983e7ff667d61da5eff8866ebee3b1dd0f65ed770d944be4ebec2b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4c33a9bd2bcb31b6602888ed7fb258fc19343afc3e8fb37d7b1e278f785139da\"" Mar 17 17:53:48.131564 containerd[1493]: time="2025-03-17T17:53:48.131477911Z" level=info msg="StartContainer for \"4c33a9bd2bcb31b6602888ed7fb258fc19343afc3e8fb37d7b1e278f785139da\"" Mar 17 17:53:48.181392 systemd[1]: Started cri-containerd-4c33a9bd2bcb31b6602888ed7fb258fc19343afc3e8fb37d7b1e278f785139da.scope - libcontainer container 4c33a9bd2bcb31b6602888ed7fb258fc19343afc3e8fb37d7b1e278f785139da. 
Mar 17 17:53:48.218956 containerd[1493]: time="2025-03-17T17:53:48.217640701Z" level=info msg="StartContainer for \"4c33a9bd2bcb31b6602888ed7fb258fc19343afc3e8fb37d7b1e278f785139da\" returns successfully" Mar 17 17:53:48.813800 kubelet[2018]: E0317 17:53:48.813671 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:49.069694 kubelet[2018]: I0317 17:53:49.068840 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-lvkx4" podStartSLOduration=4.481905156 podStartE2EDuration="8.068811215s" podCreationTimestamp="2025-03-17 17:53:41 +0000 UTC" firstStartedPulling="2025-03-17 17:53:44.520787833 +0000 UTC m=+21.541107383" lastFinishedPulling="2025-03-17 17:53:48.107693852 +0000 UTC m=+25.128013442" observedRunningTime="2025-03-17 17:53:49.068619612 +0000 UTC m=+26.088939162" watchObservedRunningTime="2025-03-17 17:53:49.068811215 +0000 UTC m=+26.089130765" Mar 17 17:53:49.633127 containerd[1493]: time="2025-03-17T17:53:49.633078658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:49.635402 containerd[1493]: time="2025-03-17T17:53:49.635348410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 17 17:53:49.636610 containerd[1493]: time="2025-03-17T17:53:49.636556306Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:49.640029 containerd[1493]: time="2025-03-17T17:53:49.639947513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:49.641016 containerd[1493]: 
time="2025-03-17T17:53:49.640972488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.530926362s" Mar 17 17:53:49.641285 containerd[1493]: time="2025-03-17T17:53:49.641169570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 17 17:53:49.645786 containerd[1493]: time="2025-03-17T17:53:49.645486550Z" level=info msg="CreateContainer within sandbox \"f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:53:49.666819 containerd[1493]: time="2025-03-17T17:53:49.666640043Z" level=info msg="CreateContainer within sandbox \"f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d27376e5c852a55c3721fa1db9ab3431ec8cb020196119a446bb430c35e03140\"" Mar 17 17:53:49.668642 containerd[1493]: time="2025-03-17T17:53:49.667795378Z" level=info msg="StartContainer for \"d27376e5c852a55c3721fa1db9ab3431ec8cb020196119a446bb430c35e03140\"" Mar 17 17:53:49.706145 systemd[1]: Started cri-containerd-d27376e5c852a55c3721fa1db9ab3431ec8cb020196119a446bb430c35e03140.scope - libcontainer container d27376e5c852a55c3721fa1db9ab3431ec8cb020196119a446bb430c35e03140. 
Mar 17 17:53:49.739375 containerd[1493]: time="2025-03-17T17:53:49.739313848Z" level=info msg="StartContainer for \"d27376e5c852a55c3721fa1db9ab3431ec8cb020196119a446bb430c35e03140\" returns successfully" Mar 17 17:53:49.742662 containerd[1493]: time="2025-03-17T17:53:49.742370810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:53:49.814512 kubelet[2018]: E0317 17:53:49.814439 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:50.815325 kubelet[2018]: E0317 17:53:50.815228 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:51.719930 containerd[1493]: time="2025-03-17T17:53:51.719699650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:51.720929 containerd[1493]: time="2025-03-17T17:53:51.720717663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 17 17:53:51.722049 containerd[1493]: time="2025-03-17T17:53:51.721982520Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:51.724940 containerd[1493]: time="2025-03-17T17:53:51.724300950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:51.725226 containerd[1493]: time="2025-03-17T17:53:51.725091560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.98267967s" Mar 17 17:53:51.725226 containerd[1493]: time="2025-03-17T17:53:51.725125081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 17 17:53:51.728249 containerd[1493]: time="2025-03-17T17:53:51.728196281Z" level=info msg="CreateContainer within sandbox \"f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:53:51.744977 containerd[1493]: time="2025-03-17T17:53:51.744887658Z" level=info msg="CreateContainer within sandbox \"f0842218ca1c875eca84fd52034be8d0e0f3a987419642cbf770118a1a808533\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d908f5a49531d882e7ff2543758f7e803e4263f489c9b5f7eb3ad063169b382d\"" Mar 17 17:53:51.745933 containerd[1493]: time="2025-03-17T17:53:51.745878910Z" level=info msg="StartContainer for \"d908f5a49531d882e7ff2543758f7e803e4263f489c9b5f7eb3ad063169b382d\"" Mar 17 17:53:51.784369 systemd[1]: Started cri-containerd-d908f5a49531d882e7ff2543758f7e803e4263f489c9b5f7eb3ad063169b382d.scope - libcontainer container d908f5a49531d882e7ff2543758f7e803e4263f489c9b5f7eb3ad063169b382d. 
Mar 17 17:53:51.815461 kubelet[2018]: E0317 17:53:51.815413 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:51.826144 containerd[1493]: time="2025-03-17T17:53:51.826097753Z" level=info msg="StartContainer for \"d908f5a49531d882e7ff2543758f7e803e4263f489c9b5f7eb3ad063169b382d\" returns successfully" Mar 17 17:53:51.923637 kubelet[2018]: I0317 17:53:51.923599 2018 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:53:51.923637 kubelet[2018]: I0317 17:53:51.923645 2018 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:53:52.099538 kubelet[2018]: I0317 17:53:52.098847 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zszl8" podStartSLOduration=20.938580248 podStartE2EDuration="28.098819859s" podCreationTimestamp="2025-03-17 17:53:24 +0000 UTC" firstStartedPulling="2025-03-17 17:53:44.566015004 +0000 UTC m=+21.586334554" lastFinishedPulling="2025-03-17 17:53:51.726254655 +0000 UTC m=+28.746574165" observedRunningTime="2025-03-17 17:53:52.098806339 +0000 UTC m=+29.119125889" watchObservedRunningTime="2025-03-17 17:53:52.098819859 +0000 UTC m=+29.119139529" Mar 17 17:53:52.816273 kubelet[2018]: E0317 17:53:52.816185 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:53.817473 kubelet[2018]: E0317 17:53:53.817375 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:54.818737 kubelet[2018]: E0317 17:53:54.818548 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 
17:53:55.819209 kubelet[2018]: E0317 17:53:55.819128 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:55.939549 systemd[1]: Created slice kubepods-besteffort-pod05529813_e835_4ea2_b4dc_4eeb7316929d.slice - libcontainer container kubepods-besteffort-pod05529813_e835_4ea2_b4dc_4eeb7316929d.slice. Mar 17 17:53:56.046659 kubelet[2018]: I0317 17:53:56.046540 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq225\" (UniqueName: \"kubernetes.io/projected/05529813-e835-4ea2-b4dc-4eeb7316929d-kube-api-access-rq225\") pod \"nfs-server-provisioner-0\" (UID: \"05529813-e835-4ea2-b4dc-4eeb7316929d\") " pod="default/nfs-server-provisioner-0" Mar 17 17:53:56.046659 kubelet[2018]: I0317 17:53:56.046629 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/05529813-e835-4ea2-b4dc-4eeb7316929d-data\") pod \"nfs-server-provisioner-0\" (UID: \"05529813-e835-4ea2-b4dc-4eeb7316929d\") " pod="default/nfs-server-provisioner-0" Mar 17 17:53:56.243936 containerd[1493]: time="2025-03-17T17:53:56.243845289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:05529813-e835-4ea2-b4dc-4eeb7316929d,Namespace:default,Attempt:0,}" Mar 17 17:53:56.429724 systemd-networkd[1378]: cali60e51b789ff: Link UP Mar 17 17:53:56.431943 systemd-networkd[1378]: cali60e51b789ff: Gained carrier Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.314 [INFO][3435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 05529813-e835-4ea2-b4dc-4eeb7316929d 1626 0 2025-03-17 17:53:55 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 
controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.314 [INFO][3435] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.355 [INFO][3453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" HandleID="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.375 [INFO][3453] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" HandleID="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332a20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:53:56.355580012 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.375 [INFO][3453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.375 [INFO][3453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.375 [INFO][3453] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.379 [INFO][3453] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.386 [INFO][3453] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.394 [INFO][3453] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.397 [INFO][3453] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.401 [INFO][3453] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.401 [INFO][3453] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 
handle="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.404 [INFO][3453] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.412 [INFO][3453] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.422 [INFO][3453] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.422 [INFO][3453] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" host="10.0.0.4" Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.422 [INFO][3453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:53:56.448756 containerd[1493]: 2025-03-17 17:53:56.422 [INFO][3453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" HandleID="k8s-pod-network.62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:53:56.450242 containerd[1493]: 2025-03-17 17:53:56.425 [INFO][3435] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"05529813-e835-4ea2-b4dc-4eeb7316929d", ResourceVersion:"1626", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", 
IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:56.450242 containerd[1493]: 2025-03-17 17:53:56.426 [INFO][3435] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:53:56.450242 containerd[1493]: 2025-03-17 17:53:56.426 [INFO][3435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:53:56.450242 containerd[1493]: 2025-03-17 17:53:56.431 [INFO][3435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:53:56.450474 containerd[1493]: 2025-03-17 17:53:56.432 [INFO][3435] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"05529813-e835-4ea2-b4dc-4eeb7316929d", ResourceVersion:"1626", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", 
"apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"12:d7:2e:a6:66:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:56.450474 containerd[1493]: 2025-03-17 17:53:56.445 [INFO][3435] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:53:56.474816 containerd[1493]: time="2025-03-17T17:53:56.474672736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:56.475361 containerd[1493]: time="2025-03-17T17:53:56.474829978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:56.475361 containerd[1493]: time="2025-03-17T17:53:56.474859138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.475361 containerd[1493]: time="2025-03-17T17:53:56.475091781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.505289 systemd[1]: Started cri-containerd-62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f.scope - libcontainer container 62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f. Mar 17 17:53:56.543924 containerd[1493]: time="2025-03-17T17:53:56.543866906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:05529813-e835-4ea2-b4dc-4eeb7316929d,Namespace:default,Attempt:0,} returns sandbox id \"62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f\"" Mar 17 17:53:56.547019 containerd[1493]: time="2025-03-17T17:53:56.546804179Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:53:56.820505 kubelet[2018]: E0317 17:53:56.820250 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:57.820525 kubelet[2018]: E0317 17:53:57.820440 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:58.169272 systemd-networkd[1378]: cali60e51b789ff: Gained IPv6LL Mar 17 17:53:58.378627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79411699.mount: Deactivated successfully. 
Mar 17 17:53:58.822157 kubelet[2018]: E0317 17:53:58.821291 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:53:59.822176 kubelet[2018]: E0317 17:53:59.822124 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:00.022041 containerd[1493]: time="2025-03-17T17:54:00.021960691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:00.023926 containerd[1493]: time="2025-03-17T17:54:00.023545146Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Mar 17 17:54:00.025010 containerd[1493]: time="2025-03-17T17:54:00.024948000Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:00.028940 containerd[1493]: time="2025-03-17T17:54:00.028707677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:00.031006 containerd[1493]: time="2025-03-17T17:54:00.030825418Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.483976079s" Mar 17 17:54:00.031006 containerd[1493]: time="2025-03-17T17:54:00.030873338Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 17:54:00.035262 containerd[1493]: time="2025-03-17T17:54:00.035118660Z" level=info msg="CreateContainer within sandbox \"62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:54:00.058046 containerd[1493]: time="2025-03-17T17:54:00.057988925Z" level=info msg="CreateContainer within sandbox \"62a2508e8154b2fd5076d043cdb2b7d36c33ca97bd4732baef56f41bbfa6662f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3311b2f39db55bb3c4d71ac8ad0eaa747c30346d66423c8bed1fdbb4da28cc44\"" Mar 17 17:54:00.060228 containerd[1493]: time="2025-03-17T17:54:00.058842453Z" level=info msg="StartContainer for \"3311b2f39db55bb3c4d71ac8ad0eaa747c30346d66423c8bed1fdbb4da28cc44\"" Mar 17 17:54:00.105318 systemd[1]: Started cri-containerd-3311b2f39db55bb3c4d71ac8ad0eaa747c30346d66423c8bed1fdbb4da28cc44.scope - libcontainer container 3311b2f39db55bb3c4d71ac8ad0eaa747c30346d66423c8bed1fdbb4da28cc44. 
Mar 17 17:54:00.152562 containerd[1493]: time="2025-03-17T17:54:00.152493413Z" level=info msg="StartContainer for \"3311b2f39db55bb3c4d71ac8ad0eaa747c30346d66423c8bed1fdbb4da28cc44\" returns successfully" Mar 17 17:54:00.822799 kubelet[2018]: E0317 17:54:00.822675 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:01.825233 kubelet[2018]: E0317 17:54:01.825136 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:02.826299 kubelet[2018]: E0317 17:54:02.826225 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:03.792405 kubelet[2018]: E0317 17:54:03.792250 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:03.827475 kubelet[2018]: E0317 17:54:03.827368 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:04.827633 kubelet[2018]: E0317 17:54:04.827538 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:05.828017 kubelet[2018]: E0317 17:54:05.827890 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:06.828501 kubelet[2018]: E0317 17:54:06.828329 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:07.829001 kubelet[2018]: E0317 17:54:07.828929 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:08.829585 kubelet[2018]: E0317 17:54:08.829517 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 17:54:09.693171 kubelet[2018]: I0317 17:54:09.693018 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.206591679 podStartE2EDuration="14.692991462s" podCreationTimestamp="2025-03-17 17:53:55 +0000 UTC" firstStartedPulling="2025-03-17 17:53:56.546479655 +0000 UTC m=+33.566799205" lastFinishedPulling="2025-03-17 17:54:00.032879438 +0000 UTC m=+37.053198988" observedRunningTime="2025-03-17 17:54:01.138329461 +0000 UTC m=+38.158649051" watchObservedRunningTime="2025-03-17 17:54:09.692991462 +0000 UTC m=+46.713311012" Mar 17 17:54:09.705488 systemd[1]: Created slice kubepods-besteffort-pod54fb1c59_1e85_4abf_9f8f_180185526340.slice - libcontainer container kubepods-besteffort-pod54fb1c59_1e85_4abf_9f8f_180185526340.slice. Mar 17 17:54:09.830122 kubelet[2018]: E0317 17:54:09.830053 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:09.841386 kubelet[2018]: I0317 17:54:09.841051 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-674bf90d-dd5e-4c01-a6e2-dd9e2dee4b2d\" (UniqueName: \"kubernetes.io/nfs/54fb1c59-1e85-4abf-9f8f-180185526340-pvc-674bf90d-dd5e-4c01-a6e2-dd9e2dee4b2d\") pod \"test-pod-1\" (UID: \"54fb1c59-1e85-4abf-9f8f-180185526340\") " pod="default/test-pod-1" Mar 17 17:54:09.841386 kubelet[2018]: I0317 17:54:09.841118 2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fq4d\" (UniqueName: \"kubernetes.io/projected/54fb1c59-1e85-4abf-9f8f-180185526340-kube-api-access-7fq4d\") pod \"test-pod-1\" (UID: \"54fb1c59-1e85-4abf-9f8f-180185526340\") " pod="default/test-pod-1" Mar 17 17:54:09.970933 kernel: FS-Cache: Loaded Mar 17 17:54:09.997161 kernel: RPC: Registered named UNIX socket transport module. 
Mar 17 17:54:09.997322 kernel: RPC: Registered udp transport module. Mar 17 17:54:09.997359 kernel: RPC: Registered tcp transport module. Mar 17 17:54:09.997391 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 17:54:09.997442 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 17 17:54:10.169988 kernel: NFS: Registering the id_resolver key type Mar 17 17:54:10.170198 kernel: Key type id_resolver registered Mar 17 17:54:10.171335 kernel: Key type id_legacy registered Mar 17 17:54:10.195365 nfsidmap[3641]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 17:54:10.199916 nfsidmap[3642]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 17:54:10.310750 containerd[1493]: time="2025-03-17T17:54:10.310118628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:54fb1c59-1e85-4abf-9f8f-180185526340,Namespace:default,Attempt:0,}" Mar 17 17:54:10.485930 systemd-networkd[1378]: cali5ec59c6bf6e: Link UP Mar 17 17:54:10.487324 systemd-networkd[1378]: cali5ec59c6bf6e: Gained carrier Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.377 [INFO][3647] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default 54fb1c59-1e85-4abf-9f8f-180185526340 1682 0 2025-03-17 17:53:57 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.377 [INFO][3647] cni-plugin/k8s.go 77: Extracted identifiers 
for CmdAddK8s ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.413 [INFO][3655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" HandleID="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Workload="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.431 [INFO][3655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" HandleID="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d4b30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-03-17 17:54:10.413011491 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.431 [INFO][3655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.431 [INFO][3655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.431 [INFO][3655] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.434 [INFO][3655] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.441 [INFO][3655] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.449 [INFO][3655] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.452 [INFO][3655] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.456 [INFO][3655] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.456 [INFO][3655] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.461 [INFO][3655] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76 Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.467 [INFO][3655] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.478 [INFO][3655] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 
handle="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.478 [INFO][3655] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" host="10.0.0.4" Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.478 [INFO][3655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:54:10.502151 containerd[1493]: 2025-03-17 17:54:10.478 [INFO][3655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" HandleID="k8s-pod-network.14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Workload="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 17:54:10.502798 containerd[1493]: 2025-03-17 17:54:10.482 [INFO][3647] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"54fb1c59-1e85-4abf-9f8f-180185526340", ResourceVersion:"1682", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:54:10.502798 containerd[1493]: 2025-03-17 17:54:10.482 [INFO][3647] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 17:54:10.502798 containerd[1493]: 2025-03-17 17:54:10.482 [INFO][3647] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 17:54:10.502798 containerd[1493]: 2025-03-17 17:54:10.487 [INFO][3647] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 17:54:10.502798 containerd[1493]: 2025-03-17 17:54:10.488 [INFO][3647] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"54fb1c59-1e85-4abf-9f8f-180185526340", ResourceVersion:"1682", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 
57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0a:b4:f4:98:ff:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:54:10.502798 containerd[1493]: 2025-03-17 17:54:10.499 [INFO][3647] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 17:54:10.544253 containerd[1493]: time="2025-03-17T17:54:10.544089838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:54:10.544253 containerd[1493]: time="2025-03-17T17:54:10.544159238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:54:10.544253 containerd[1493]: time="2025-03-17T17:54:10.544172038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:10.544645 containerd[1493]: time="2025-03-17T17:54:10.544497121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:10.568263 systemd[1]: Started cri-containerd-14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76.scope - libcontainer container 14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76. Mar 17 17:54:10.609554 containerd[1493]: time="2025-03-17T17:54:10.609393029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:54fb1c59-1e85-4abf-9f8f-180185526340,Namespace:default,Attempt:0,} returns sandbox id \"14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76\"" Mar 17 17:54:10.611877 containerd[1493]: time="2025-03-17T17:54:10.611501444Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:54:10.831219 kubelet[2018]: E0317 17:54:10.831003 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:11.083531 containerd[1493]: time="2025-03-17T17:54:11.081739624Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:11.088063 containerd[1493]: time="2025-03-17T17:54:11.085122928Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:54:11.088063 containerd[1493]: time="2025-03-17T17:54:11.087442224Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 475.895499ms" Mar 17 17:54:11.088063 containerd[1493]: time="2025-03-17T17:54:11.087483184Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:54:11.090750 containerd[1493]: 
time="2025-03-17T17:54:11.090603366Z" level=info msg="CreateContainer within sandbox \"14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:54:11.130025 containerd[1493]: time="2025-03-17T17:54:11.129957962Z" level=info msg="CreateContainer within sandbox \"14043fb97978aa68e4c18cacc32dfca143e77b8c2fd3bc80dd6a271478781f76\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e4185ea56dc4cd07ba2d0006fa6149cb30f0ddf2a311096a5a234b6cb4ef6a03\"" Mar 17 17:54:11.131277 containerd[1493]: time="2025-03-17T17:54:11.131148810Z" level=info msg="StartContainer for \"e4185ea56dc4cd07ba2d0006fa6149cb30f0ddf2a311096a5a234b6cb4ef6a03\"" Mar 17 17:54:11.173276 systemd[1]: Started cri-containerd-e4185ea56dc4cd07ba2d0006fa6149cb30f0ddf2a311096a5a234b6cb4ef6a03.scope - libcontainer container e4185ea56dc4cd07ba2d0006fa6149cb30f0ddf2a311096a5a234b6cb4ef6a03. Mar 17 17:54:11.206020 containerd[1493]: time="2025-03-17T17:54:11.205090448Z" level=info msg="StartContainer for \"e4185ea56dc4cd07ba2d0006fa6149cb30f0ddf2a311096a5a234b6cb4ef6a03\" returns successfully" Mar 17 17:54:11.832154 kubelet[2018]: E0317 17:54:11.832026 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:12.180772 kubelet[2018]: I0317 17:54:12.180515 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.70266565 podStartE2EDuration="15.180491803s" podCreationTimestamp="2025-03-17 17:53:57 +0000 UTC" firstStartedPulling="2025-03-17 17:54:10.61089128 +0000 UTC m=+47.631210830" lastFinishedPulling="2025-03-17 17:54:11.088717433 +0000 UTC m=+48.109036983" observedRunningTime="2025-03-17 17:54:12.179803278 +0000 UTC m=+49.200122868" watchObservedRunningTime="2025-03-17 17:54:12.180491803 +0000 UTC m=+49.200811353" Mar 17 17:54:12.441745 systemd-networkd[1378]: cali5ec59c6bf6e: Gained 
IPv6LL Mar 17 17:54:12.832970 kubelet[2018]: E0317 17:54:12.832759 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:13.833707 kubelet[2018]: E0317 17:54:13.833611 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:14.834478 kubelet[2018]: E0317 17:54:14.834396 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:15.834935 kubelet[2018]: E0317 17:54:15.834754 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:16.835307 kubelet[2018]: E0317 17:54:16.835203 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:17.836503 kubelet[2018]: E0317 17:54:17.836316 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:18.837259 kubelet[2018]: E0317 17:54:18.837152 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:19.838236 kubelet[2018]: E0317 17:54:19.838142 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:20.839012 kubelet[2018]: E0317 17:54:20.838921 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:21.139462 systemd[1]: Started sshd@7-49.12.184.245:22-154.117.199.5:52079.service - OpenSSH per-connection server daemon (154.117.199.5:52079). Mar 17 17:54:21.697586 sshd[3800]: Connection closed by 154.117.199.5 port 52079 [preauth] Mar 17 17:54:21.699803 systemd[1]: sshd@7-49.12.184.245:22-154.117.199.5:52079.service: Deactivated successfully. 
Mar 17 17:54:21.839886 kubelet[2018]: E0317 17:54:21.839777 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:22.841013 kubelet[2018]: E0317 17:54:22.840938 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:23.791616 kubelet[2018]: E0317 17:54:23.791549 2018 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:23.841995 kubelet[2018]: E0317 17:54:23.841681 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:54:23.845370 containerd[1493]: time="2025-03-17T17:54:23.844607321Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\"" Mar 17 17:54:23.845370 containerd[1493]: time="2025-03-17T17:54:23.844741042Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully" Mar 17 17:54:23.845370 containerd[1493]: time="2025-03-17T17:54:23.844753482Z" level=info msg="StopPodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully" Mar 17 17:54:23.846505 containerd[1493]: time="2025-03-17T17:54:23.845344205Z" level=info msg="RemovePodSandbox for \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\"" Mar 17 17:54:23.846505 containerd[1493]: time="2025-03-17T17:54:23.845973808Z" level=info msg="Forcibly stopping sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\"" Mar 17 17:54:23.846505 containerd[1493]: time="2025-03-17T17:54:23.846088168Z" level=info msg="TearDown network for sandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" successfully" Mar 17 17:54:23.850681 containerd[1493]: time="2025-03-17T17:54:23.850513270Z" level=warning 
msg="Failed to get podSandbox status for container event for sandboxID \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:54:23.850681 containerd[1493]: time="2025-03-17T17:54:23.850623110Z" level=info msg="RemovePodSandbox \"460ad8e3ec6624ab7456ecccca045877b0b74e9ed6ba603425717e6147749ef3\" returns successfully" Mar 17 17:54:23.851487 containerd[1493]: time="2025-03-17T17:54:23.851439234Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\"" Mar 17 17:54:23.851657 containerd[1493]: time="2025-03-17T17:54:23.851615755Z" level=info msg="TearDown network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" successfully" Mar 17 17:54:23.851729 containerd[1493]: time="2025-03-17T17:54:23.851661155Z" level=info msg="StopPodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" returns successfully" Mar 17 17:54:23.852979 containerd[1493]: time="2025-03-17T17:54:23.852120158Z" level=info msg="RemovePodSandbox for \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\"" Mar 17 17:54:23.852979 containerd[1493]: time="2025-03-17T17:54:23.852157678Z" level=info msg="Forcibly stopping sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\"" Mar 17 17:54:23.852979 containerd[1493]: time="2025-03-17T17:54:23.852247678Z" level=info msg="TearDown network for sandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" successfully" Mar 17 17:54:23.859136 containerd[1493]: time="2025-03-17T17:54:23.859058711Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:54:23.859136 containerd[1493]: time="2025-03-17T17:54:23.859134632Z" level=info msg="RemovePodSandbox \"99cbe2c4a740feb049a02c194a3176bd34439c7035fa230326c78d10de1b1fe3\" returns successfully" Mar 17 17:54:23.859983 containerd[1493]: time="2025-03-17T17:54:23.859697794Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\"" Mar 17 17:54:23.859983 containerd[1493]: time="2025-03-17T17:54:23.859891515Z" level=info msg="TearDown network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" successfully" Mar 17 17:54:23.859983 containerd[1493]: time="2025-03-17T17:54:23.859934476Z" level=info msg="StopPodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" returns successfully" Mar 17 17:54:23.860475 containerd[1493]: time="2025-03-17T17:54:23.860424838Z" level=info msg="RemovePodSandbox for \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\"" Mar 17 17:54:23.860475 containerd[1493]: time="2025-03-17T17:54:23.860460758Z" level=info msg="Forcibly stopping sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\"" Mar 17 17:54:23.860566 containerd[1493]: time="2025-03-17T17:54:23.860535318Z" level=info msg="TearDown network for sandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" successfully" Mar 17 17:54:23.863392 containerd[1493]: time="2025-03-17T17:54:23.863322092Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:54:23.863392 containerd[1493]: time="2025-03-17T17:54:23.863388052Z" level=info msg="RemovePodSandbox \"75379a1b520b4368767b6bc480d2318f14a0e9c40da733d27412c8695d09ea7b\" returns successfully" Mar 17 17:54:23.863835 containerd[1493]: time="2025-03-17T17:54:23.863787534Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\"" Mar 17 17:54:23.863950 containerd[1493]: time="2025-03-17T17:54:23.863910975Z" level=info msg="TearDown network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" successfully" Mar 17 17:54:23.863950 containerd[1493]: time="2025-03-17T17:54:23.863923015Z" level=info msg="StopPodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" returns successfully" Mar 17 17:54:23.864713 containerd[1493]: time="2025-03-17T17:54:23.864639258Z" level=info msg="RemovePodSandbox for \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\"" Mar 17 17:54:23.864713 containerd[1493]: time="2025-03-17T17:54:23.864694459Z" level=info msg="Forcibly stopping sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\"" Mar 17 17:54:23.864846 containerd[1493]: time="2025-03-17T17:54:23.864823619Z" level=info msg="TearDown network for sandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" successfully" Mar 17 17:54:23.869824 containerd[1493]: time="2025-03-17T17:54:23.869761843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:54:23.870295 containerd[1493]: time="2025-03-17T17:54:23.869844844Z" level=info msg="RemovePodSandbox \"7c73b0f0c6a1653c7e7b55a40ca587f72a64dc2345b375cb641e1869f4e62dab\" returns successfully"
Mar 17 17:54:23.870884 containerd[1493]: time="2025-03-17T17:54:23.870520327Z" level=info msg="StopPodSandbox for \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\""
Mar 17 17:54:23.870884 containerd[1493]: time="2025-03-17T17:54:23.870671888Z" level=info msg="TearDown network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" successfully"
Mar 17 17:54:23.870884 containerd[1493]: time="2025-03-17T17:54:23.870688808Z" level=info msg="StopPodSandbox for \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" returns successfully"
Mar 17 17:54:23.871659 containerd[1493]: time="2025-03-17T17:54:23.871610412Z" level=info msg="RemovePodSandbox for \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\""
Mar 17 17:54:23.871659 containerd[1493]: time="2025-03-17T17:54:23.871650373Z" level=info msg="Forcibly stopping sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\""
Mar 17 17:54:23.871868 containerd[1493]: time="2025-03-17T17:54:23.871738933Z" level=info msg="TearDown network for sandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" successfully"
Mar 17 17:54:23.875652 containerd[1493]: time="2025-03-17T17:54:23.875602312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:54:23.875784 containerd[1493]: time="2025-03-17T17:54:23.875678872Z" level=info msg="RemovePodSandbox \"13b49176f967e00a5ca9258f06ce541779f41e3abb7a19f439784c86929029a9\" returns successfully"
Mar 17 17:54:23.876278 containerd[1493]: time="2025-03-17T17:54:23.876249555Z" level=info msg="StopPodSandbox for \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\""
Mar 17 17:54:23.876373 containerd[1493]: time="2025-03-17T17:54:23.876353555Z" level=info msg="TearDown network for sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\" successfully"
Mar 17 17:54:23.876373 containerd[1493]: time="2025-03-17T17:54:23.876370396Z" level=info msg="StopPodSandbox for \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\" returns successfully"
Mar 17 17:54:23.876718 containerd[1493]: time="2025-03-17T17:54:23.876690437Z" level=info msg="RemovePodSandbox for \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\""
Mar 17 17:54:23.876775 containerd[1493]: time="2025-03-17T17:54:23.876723997Z" level=info msg="Forcibly stopping sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\""
Mar 17 17:54:23.876826 containerd[1493]: time="2025-03-17T17:54:23.876792718Z" level=info msg="TearDown network for sandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\" successfully"
Mar 17 17:54:23.879870 containerd[1493]: time="2025-03-17T17:54:23.879822012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:54:23.879870 containerd[1493]: time="2025-03-17T17:54:23.879917093Z" level=info msg="RemovePodSandbox \"0ba9f9744063564edc76fb4369ffe1c147bc8b9bda45ef499da1c530daa77ac8\" returns successfully"
Mar 17 17:54:23.880925 containerd[1493]: time="2025-03-17T17:54:23.880647336Z" level=info msg="StopPodSandbox for \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\""
Mar 17 17:54:23.880925 containerd[1493]: time="2025-03-17T17:54:23.880785377Z" level=info msg="TearDown network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" successfully"
Mar 17 17:54:23.880925 containerd[1493]: time="2025-03-17T17:54:23.880845097Z" level=info msg="StopPodSandbox for \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" returns successfully"
Mar 17 17:54:23.881455 containerd[1493]: time="2025-03-17T17:54:23.881428620Z" level=info msg="RemovePodSandbox for \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\""
Mar 17 17:54:23.881509 containerd[1493]: time="2025-03-17T17:54:23.881466420Z" level=info msg="Forcibly stopping sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\""
Mar 17 17:54:23.881569 containerd[1493]: time="2025-03-17T17:54:23.881551781Z" level=info msg="TearDown network for sandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" successfully"
Mar 17 17:54:23.885180 containerd[1493]: time="2025-03-17T17:54:23.885086358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:54:23.885180 containerd[1493]: time="2025-03-17T17:54:23.885162638Z" level=info msg="RemovePodSandbox \"96ba6ae51d65d6cd377422edeee50841f6943e6561ba519f56d04103fbdc522e\" returns successfully"
Mar 17 17:54:23.885924 containerd[1493]: time="2025-03-17T17:54:23.885641601Z" level=info msg="StopPodSandbox for \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\""
Mar 17 17:54:23.885924 containerd[1493]: time="2025-03-17T17:54:23.885774201Z" level=info msg="TearDown network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\" successfully"
Mar 17 17:54:23.885924 containerd[1493]: time="2025-03-17T17:54:23.885788401Z" level=info msg="StopPodSandbox for \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\" returns successfully"
Mar 17 17:54:23.886204 containerd[1493]: time="2025-03-17T17:54:23.886147803Z" level=info msg="RemovePodSandbox for \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\""
Mar 17 17:54:23.886204 containerd[1493]: time="2025-03-17T17:54:23.886187083Z" level=info msg="Forcibly stopping sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\""
Mar 17 17:54:23.886286 containerd[1493]: time="2025-03-17T17:54:23.886257404Z" level=info msg="TearDown network for sandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\" successfully"
Mar 17 17:54:23.889743 containerd[1493]: time="2025-03-17T17:54:23.889671180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:54:23.889743 containerd[1493]: time="2025-03-17T17:54:23.889735341Z" level=info msg="RemovePodSandbox \"7f664a8930f8de726bbbed8721d33cfeb5d03ad6e9f92c6fea71ee3cb4b74630\" returns successfully"
Mar 17 17:54:24.842917 kubelet[2018]: E0317 17:54:24.842850 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:25.844149 kubelet[2018]: E0317 17:54:25.844057 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:26.845074 kubelet[2018]: E0317 17:54:26.845000 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:27.846222 kubelet[2018]: E0317 17:54:27.846064 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:28.846996 kubelet[2018]: E0317 17:54:28.846927 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:29.848192 kubelet[2018]: E0317 17:54:29.848105 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:30.848832 kubelet[2018]: E0317 17:54:30.848678 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:31.849722 kubelet[2018]: E0317 17:54:31.849645 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:32.850818 kubelet[2018]: E0317 17:54:32.850697 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:33.851843 kubelet[2018]: E0317 17:54:33.851733 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:34.852664 kubelet[2018]: E0317 17:54:34.852590 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:54:35.182612 kubelet[2018]: E0317 17:54:35.182509 2018 controller.go:195] "Failed to update lease" err="Put \"https://138.199.229.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:54:35.233333 kubelet[2018]: E0317 17:54:35.233024 2018 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:54:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:54:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:54:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:54:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.2\\\"],\\\"sizeBytes\\\":137085886},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.2\\\"],\\\"sizeBytes\\\":92597153},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":69703745},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\\\",\\\"registry.k8s.io/kube-proxy:v1.32.3\\\"],\\\"sizeBytes\\\":27369114},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\\\"],\\\"sizeBytes\\\":14491426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.2\\\"],\\\"sizeBytes\\\":8843558},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\\\"],\\\"sizeBytes\\\":6489869},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"10.0.0.4\": Patch \"https://138.199.229.115:6443/api/v1/nodes/10.0.0.4/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:54:35.853676 kubelet[2018]: E0317 17:54:35.853596 2018 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"