Feb 13 19:35:06.868577 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:35:06.868598 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025 Feb 13 19:35:06.868608 kernel: KASLR enabled Feb 13 19:35:06.868614 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Feb 13 19:35:06.868620 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Feb 13 19:35:06.868625 kernel: random: crng init done Feb 13 19:35:06.868632 kernel: secureboot: Secure boot disabled Feb 13 19:35:06.868638 kernel: ACPI: Early table checksum verification disabled Feb 13 19:35:06.868644 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Feb 13 19:35:06.868651 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:35:06.868657 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868663 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868669 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868674 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868682 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868689 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868695 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868702 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868708 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:35:06.868714 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 19:35:06.868720 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Feb 13 19:35:06.868726 kernel: NUMA: Failed to initialise from firmware Feb 13 19:35:06.868732 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Feb 13 19:35:06.868738 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Feb 13 19:35:06.868744 kernel: Zone ranges: Feb 13 19:35:06.868751 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 19:35:06.868757 kernel: DMA32 empty Feb 13 19:35:06.868763 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Feb 13 19:35:06.868769 kernel: Movable zone start for each node Feb 13 19:35:06.868775 kernel: Early memory node ranges Feb 13 19:35:06.868781 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Feb 13 19:35:06.868787 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Feb 13 19:35:06.868793 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Feb 13 19:35:06.868799 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Feb 13 19:35:06.868805 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Feb 13 19:35:06.868811 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Feb 13 19:35:06.868817 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Feb 13 19:35:06.868824 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Feb 13 19:35:06.868830 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Feb 13 19:35:06.868836 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Feb 13 19:35:06.868845 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Feb 13 19:35:06.868851 kernel: psci: probing for conduit method from ACPI. Feb 13 19:35:06.868858 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 19:35:06.868865 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:35:06.868872 kernel: psci: Trusted OS migration not required Feb 13 19:35:06.868878 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:35:06.868885 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:35:06.868891 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:35:06.868898 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:35:06.868904 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 19:35:06.868910 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:35:06.868917 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:35:06.868923 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:35:06.868931 kernel: CPU features: detected: Spectre-v4 Feb 13 19:35:06.868937 kernel: CPU features: detected: Spectre-BHB Feb 13 19:35:06.868944 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:35:06.868950 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:35:06.868969 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:35:06.868976 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:35:06.868983 kernel: alternatives: applying boot alternatives Feb 13 19:35:06.868991 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 19:35:06.868998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:35:06.869004 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:35:06.869011 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:35:06.869020 kernel: Fallback order for Node 0: 0 Feb 13 19:35:06.869027 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Feb 13 19:35:06.869033 kernel: Policy zone: Normal Feb 13 19:35:06.869039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:35:06.869046 kernel: software IO TLB: area num 2. Feb 13 19:35:06.869053 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Feb 13 19:35:06.869060 kernel: Memory: 3882296K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 213704K reserved, 0K cma-reserved) Feb 13 19:35:06.869066 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:35:06.869073 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:35:06.869080 kernel: rcu: RCU event tracing is enabled. Feb 13 19:35:06.869087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:35:06.869093 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:35:06.869101 kernel: Tracing variant of Tasks RCU enabled. 
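Editor's note: the kernel command line logged above carries Flatcar's /usr setup (mount.usr=/dev/mapper/usr, verity.usr=PARTUUID=...) and root selection (root=LABEL=ROOT); parameters the kernel itself does not recognize, such as BOOT_IMAGE, are passed through to user space as the next message states. A minimal sketch, assuming a Linux /proc/cmdline, of splitting such a line into key=value pairs for inspection; quoting is not handled, since this particular command line does not use it.

```python
# Minimal /proc/cmdline parser (simplified: no quoted-value handling,
# which the command line logged above does not need).
def parse_cmdline(line: str) -> tuple[dict, list]:
    params, flags = {}, []
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            flags.append(token)  # bare flags with no value
    return params, flags

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        params, flags = parse_cmdline(f.read())
    # On the system above this would show, for example:
    #   verity.usr -> PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
    #   root       -> LABEL=ROOT
    #   console    -> ttyAMA0,115200n8
    for key in ("root", "mount.usr", "verity.usr", "console", "flatcar.oem.id"):
        print(key, "->", params.get(key))
```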
Feb 13 19:35:06.869108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:35:06.869114 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:35:06.869121 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:35:06.869127 kernel: GICv3: 256 SPIs implemented Feb 13 19:35:06.869134 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:35:06.869140 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:35:06.869146 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:35:06.869153 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:35:06.869159 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:35:06.869166 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:35:06.869174 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:35:06.869180 kernel: GICv3: using LPI property table @0x00000001000e0000 Feb 13 19:35:06.869187 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Feb 13 19:35:06.869194 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:35:06.869200 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:35:06.869206 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:35:06.869213 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:35:06.869220 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:35:06.869226 kernel: Console: colour dummy device 80x25 Feb 13 19:35:06.869233 kernel: ACPI: Core revision 20230628 Feb 13 19:35:06.869240 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 19:35:06.869248 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:35:06.869255 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:35:06.869262 kernel: landlock: Up and running. Feb 13 19:35:06.869269 kernel: SELinux: Initializing. Feb 13 19:35:06.869275 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:35:06.869282 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:35:06.869289 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:35:06.869296 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:35:06.869302 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:35:06.869310 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:35:06.869317 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:35:06.869324 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:35:06.869330 kernel: Remapping and enabling EFI services. Feb 13 19:35:06.869337 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:35:06.869343 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:35:06.869350 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:35:06.869357 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Feb 13 19:35:06.869363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:35:06.869372 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:35:06.869378 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:35:06.869412 kernel: SMP: Total of 2 processors activated. Feb 13 19:35:06.869421 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:35:06.869428 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:35:06.869435 kernel: CPU features: detected: Common not Private translations Feb 13 19:35:06.869442 kernel: CPU features: detected: CRC32 instructions Feb 13 19:35:06.869449 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:35:06.869457 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:35:06.869465 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:35:06.869472 kernel: CPU features: detected: Privileged Access Never Feb 13 19:35:06.869480 kernel: CPU features: detected: RAS Extension Support Feb 13 19:35:06.869487 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:35:06.869494 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:35:06.869501 kernel: alternatives: applying system-wide alternatives Feb 13 19:35:06.869508 kernel: devtmpfs: initialized Feb 13 19:35:06.869516 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:35:06.869525 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:35:06.869532 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:35:06.869539 kernel: SMBIOS 3.0.0 present. Feb 13 19:35:06.869546 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Feb 13 19:35:06.869553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:35:06.869560 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:35:06.869567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:35:06.869574 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:35:06.869581 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:35:06.869590 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Feb 13 19:35:06.869597 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:35:06.869603 kernel: cpuidle: using governor menu Feb 13 19:35:06.869610 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
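Editor's note: the SMBIOS/DMI line above records the platform strings the firmware advertises (Hetzner vServer/KVM Virtual Machine, BIOS dated 11/11/2017). As an illustrative aside, assuming the kernel exposes the usual /sys/class/dmi/id sysfs interface (available when SMBIOS is present, as reported here), the same identification strings can be read back from userspace:

```python
# Read SMBIOS/DMI identification strings via sysfs. Requires a kernel
# that exports /sys/class/dmi/id, as on the SMBIOS 3.0 system logged above.
from pathlib import Path

DMI_DIR = Path("/sys/class/dmi/id")

def dmi_field(name: str) -> str:
    try:
        return (DMI_DIR / name).read_text().strip()
    except OSError:
        return "unknown"  # field absent or not readable without root

if __name__ == "__main__":
    # For the machine above, these correspond to the DMI identification
    # line in the log (Hetzner vServer / KVM Virtual Machine, BIOS 20171111).
    for field in ("sys_vendor", "product_name", "bios_version", "bios_date"):
        print(f"{field}: {dmi_field(field)}")
```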
Feb 13 19:35:06.869617 kernel: ASID allocator initialised with 32768 entries Feb 13 19:35:06.869624 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:35:06.869631 kernel: Serial: AMBA PL011 UART driver Feb 13 19:35:06.869638 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:35:06.869645 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:35:06.869655 kernel: Modules: 508880 pages in range for PLT usage Feb 13 19:35:06.869662 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:35:06.869669 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:35:06.869676 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:35:06.869683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:35:06.869690 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:35:06.869697 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:35:06.869703 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:35:06.869711 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:35:06.869719 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:35:06.869726 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:35:06.869733 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:35:06.869740 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:35:06.869747 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:35:06.869754 kernel: ACPI: Interpreter enabled Feb 13 19:35:06.869761 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:35:06.869768 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:35:06.869775 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:35:06.869784 kernel: printk: console [ttyAMA0] enabled Feb 13 19:35:06.869791 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:35:06.869932 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:35:06.870024 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:35:06.870091 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:35:06.870156 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:35:06.870219 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:35:06.870231 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 19:35:06.870239 kernel: PCI host bridge to bus 0000:00 Feb 13 19:35:06.870309 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:35:06.870368 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:35:06.870556 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:35:06.870616 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:35:06.870695 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:35:06.870775 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Feb 13 19:35:06.870842 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Feb 13 19:35:06.870906 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 19:35:06.871021 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.871091 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Feb 13 19:35:06.871168 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.871241 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Feb 13 19:35:06.871313 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.871378 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Feb 13 19:35:06.871468 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.871535 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Feb 13 19:35:06.871606 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.871671 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Feb 13 19:35:06.871747 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.871813 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Feb 13 19:35:06.871884 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.871950 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Feb 13 19:35:06.872037 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.872105 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Feb 13 19:35:06.872181 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Feb 13 19:35:06.872246 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Feb 13 19:35:06.872321 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Feb 13 19:35:06.872419 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Feb 13 19:35:06.872502 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 19:35:06.872570 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Feb 13 19:35:06.872641 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:35:06.872707 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 19:35:06.872779 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 19:35:06.872847 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Feb 13 19:35:06.872922 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Feb 13 19:35:06.873032 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Feb 13 19:35:06.873105 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Feb 13 19:35:06.873187 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Feb 13 19:35:06.873256 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Feb 13 19:35:06.873332 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 19:35:06.873416 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Feb 13 19:35:06.873485 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Feb 13 19:35:06.873561 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Feb 13 19:35:06.873632 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Feb 13 19:35:06.873701 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 19:35:06.873776 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 19:35:06.873847 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Feb 13 19:35:06.873916 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Feb 13 19:35:06.873999 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 19:35:06.874074 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 13 19:35:06.874140 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Feb 13 19:35:06.874205 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Feb 13 19:35:06.874273 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 13 19:35:06.874339 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 13 19:35:06.874686 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Feb 13 19:35:06.874773 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 19:35:06.875312 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Feb 13 19:35:06.875427 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 13 19:35:06.875504 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 19:35:06.875569 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Feb 13 19:35:06.875634 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 13 19:35:06.875701 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 19:35:06.875766 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Feb 13 19:35:06.875828 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Feb 13 19:35:06.875906 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 19:35:06.875993 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Feb 13 19:35:06.876063 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Feb 13 19:35:06.876132 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 19:35:06.876199 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Feb 13 19:35:06.876267 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Feb 13 19:35:06.876335 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 19:35:06.876438 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Feb 13 19:35:06.876506 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Feb 13 19:35:06.876581 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 19:35:06.876658 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Feb 13 19:35:06.876722 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Feb 13 19:35:06.876787 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Feb 13 19:35:06.876851 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 19:35:06.876917 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 13 19:35:06.877004 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 19:35:06.877073 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 13 19:35:06.877139 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 19:35:06.877205 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Feb 13 19:35:06.877269 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 19:35:06.877336 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Feb 13 19:35:06.877472 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 19:35:06.877547 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Feb 13 19:35:06.877610 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 19:35:06.877674 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Feb 13 19:35:06.877737 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 19:35:06.877802 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Feb 13 19:35:06.877865 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 19:35:06.877934 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Feb 13 19:35:06.878038 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 19:35:06.878111 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Feb 13 19:35:06.878175 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Feb 13 19:35:06.878239 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Feb 13 19:35:06.878303 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 19:35:06.878367 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Feb 13 19:35:06.878461 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 19:35:06.878532 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Feb 13 19:35:06.878596 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 19:35:06.878668 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Feb 13 19:35:06.878735 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 19:35:06.878801 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Feb 13 19:35:06.878876 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 19:35:06.878942 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Feb 13 19:35:06.879026 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 19:35:06.879103 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Feb 13 19:35:06.879168 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 19:35:06.879233 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Feb 13 19:35:06.879297 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 19:35:06.879362 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Feb 13 19:35:06.879564 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Feb 13 19:35:06.879640 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Feb 13 19:35:06.879713 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Feb 13 19:35:06.879784 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:35:06.879850 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Feb 13 19:35:06.879915 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Feb 13 19:35:06.880024 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 19:35:06.880093 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Feb 13 19:35:06.880158 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 19:35:06.880230 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Feb 13 19:35:06.880300 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Feb 13 19:35:06.880366 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 19:35:06.880444 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Feb 13 19:35:06.880509 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 19:35:06.880580 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 19:35:06.880651 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Feb 13 19:35:06.880714 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Feb 13 19:35:06.880778 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 19:35:06.880840 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Feb 13 19:35:06.880903 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 19:35:06.880986 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 19:35:06.881054 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Feb 13 19:35:06.881117 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 19:35:06.881184 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Feb 13 19:35:06.881252 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 19:35:06.881323 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Feb 13 19:35:06.883458 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Feb 13 19:35:06.883577 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Feb 13 19:35:06.883646 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 19:35:06.883711 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Feb 13 19:35:06.883776 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 19:35:06.883859 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Feb 13 19:35:06.883929 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Feb 13 19:35:06.884020 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Feb 13 19:35:06.884089 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 19:35:06.884155 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Feb 13 19:35:06.884221 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 19:35:06.884297 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Feb 13 19:35:06.884366 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Feb 13 19:35:06.884454 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Feb 13 19:35:06.884523 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Feb 13 19:35:06.884589 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 19:35:06.884653 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Feb 13 19:35:06.884719 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 19:35:06.884788 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Feb 13 19:35:06.884856 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 19:35:06.884925 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Feb 13 19:35:06.885003 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 19:35:06.885074 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Feb 13 19:35:06.885142 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Feb 13 19:35:06.885207 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Feb 13 19:35:06.885272 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 19:35:06.885339 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:35:06.885409 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:35:06.885473 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:35:06.885551 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 19:35:06.885615 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Feb 13 19:35:06.885677 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 19:35:06.885747 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Feb 13 19:35:06.885808 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Feb 13 19:35:06.885872 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 19:35:06.885953 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Feb 13 19:35:06.886070 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Feb 13 19:35:06.886135 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 19:35:06.886205 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 19:35:06.886268 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Feb 13 19:35:06.886334 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 19:35:06.886427 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Feb 13 19:35:06.886492 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Feb 13 19:35:06.886556 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 19:35:06.886625 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Feb 13 19:35:06.886689 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Feb 13 19:35:06.886751 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 19:35:06.886819 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Feb 13 19:35:06.886882 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Feb 13 19:35:06.886942 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 19:35:06.887026 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Feb 13 19:35:06.887089 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Feb 13 19:35:06.887154 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 19:35:06.887224 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Feb 13 19:35:06.887285 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Feb 13 19:35:06.887347 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 19:35:06.887357 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:35:06.887365 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:35:06.887373 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:35:06.887382 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:35:06.888644 kernel: iommu: Default domain type: Translated Feb 13 19:35:06.888655 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:35:06.888662 kernel: efivars: Registered efivars operations Feb 13 19:35:06.888670 kernel: vgaarb: loaded Feb 13 19:35:06.888678 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:35:06.888685 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:35:06.888694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:35:06.888701 kernel: pnp: PnP ACPI init Feb 13 19:35:06.888839 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 19:35:06.888860 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:35:06.888868 kernel: NET: Registered PF_INET protocol family Feb 13 19:35:06.888875 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:35:06.888883 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:35:06.888891 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:35:06.888899 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:35:06.888907 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:35:06.888914 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:35:06.888924 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:35:06.888931 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:35:06.888939 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:35:06.889036 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Feb 13 19:35:06.889049 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:35:06.889056 kernel: kvm [1]: HYP mode not available Feb 13 19:35:06.889064 kernel: Initialise system trusted keyrings Feb 13 19:35:06.889071 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:35:06.889079 kernel: Key type asymmetric registered Feb 13 19:35:06.889089 kernel: Asymmetric key parser 'x509' registered Feb 13 19:35:06.889096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:35:06.889104 kernel: io scheduler mq-deadline registered Feb 13 19:35:06.889111 kernel: io scheduler kyber registered Feb 13 19:35:06.889119 kernel: io scheduler bfq registered Feb 13 19:35:06.889127 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 19:35:06.889198 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Feb 13 19:35:06.889265 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Feb 13 19:35:06.889331 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.889413 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Feb 13 19:35:06.889482 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Feb 13 19:35:06.889554 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.889622 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 19:35:06.889688 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 19:35:06.889756 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.889824 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 19:35:06.889890 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 19:35:06.889962 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.890033 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 19:35:06.890101 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 19:35:06.890171 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.890241 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 19:35:06.890307 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 19:35:06.890371 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.891509 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 19:35:06.891590 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 19:35:06.891666 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.891737 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 19:35:06.891803 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 19:35:06.891869 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.891881 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 19:35:06.891948 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 19:35:06.892041 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 19:35:06.892107 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:35:06.892117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:35:06.892125 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:35:06.892133 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:35:06.892205 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 19:35:06.892279 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 19:35:06.892290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:35:06.892301 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:35:06.892369 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 19:35:06.892379 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 19:35:06.892774 kernel: thunder_xcv, ver 1.0 Feb 13 19:35:06.892791 kernel: thunder_bgx, ver 1.0 Feb 13 19:35:06.892799 kernel: nicpf, ver 1.0 Feb 13 19:35:06.892807 kernel: nicvf, ver 
1.0 Feb 13 19:35:06.892912 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:35:06.893004 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:35:06 UTC (1739475306) Feb 13 19:35:06.893022 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:35:06.893030 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:35:06.893037 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:35:06.893045 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:35:06.893053 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:35:06.893060 kernel: Segment Routing with IPv6 Feb 13 19:35:06.893068 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:35:06.893075 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:35:06.893084 kernel: Key type dns_resolver registered Feb 13 19:35:06.893092 kernel: registered taskstats version 1 Feb 13 19:35:06.893099 kernel: Loading compiled-in X.509 certificates Feb 13 19:35:06.893107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3' Feb 13 19:35:06.893115 kernel: Key type .fscrypt registered Feb 13 19:35:06.893125 kernel: Key type fscrypt-provisioning registered Feb 13 19:35:06.893133 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 19:35:06.893142 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:35:06.893150 kernel: ima: No architecture policies found Feb 13 19:35:06.893159 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:35:06.893167 kernel: clk: Disabling unused clocks Feb 13 19:35:06.893174 kernel: Freeing unused kernel memory: 39936K Feb 13 19:35:06.893182 kernel: Run /init as init process Feb 13 19:35:06.893189 kernel: with arguments: Feb 13 19:35:06.893197 kernel: /init Feb 13 19:35:06.893204 kernel: with environment: Feb 13 19:35:06.893211 kernel: HOME=/ Feb 13 19:35:06.893219 kernel: TERM=linux Feb 13 19:35:06.893227 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:35:06.893237 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:35:06.893247 systemd[1]: Detected virtualization kvm. Feb 13 19:35:06.893255 systemd[1]: Detected architecture arm64. Feb 13 19:35:06.893262 systemd[1]: Running in initrd. Feb 13 19:35:06.893272 systemd[1]: No hostname configured, using default hostname. Feb 13 19:35:06.893280 systemd[1]: Hostname set to . Feb 13 19:35:06.893290 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:35:06.893298 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:35:06.893306 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:35:06.893314 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:35:06.893323 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:35:06.893331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:35:06.893340 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
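Editor's note: the rtc-efi line above reports both the wall-clock time and the corresponding Unix epoch value, 2025-02-13T19:35:06 UTC (1739475306), which also matches the journal's own timestamp prefix. A quick check of that correspondence:

```python
# Confirm that epoch 1739475306 (from the rtc-efi message above) is the
# same instant the kernel printed alongside it.
from datetime import datetime, timezone

epoch = 1_739_475_306
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-02-13T19:35:06+00:00
```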
Feb 13 19:35:06.893348 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:35:06.893359 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:35:06.893368 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:35:06.893376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:35:06.893384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:35:06.893465 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:35:06.893474 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:35:06.893482 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:35:06.893493 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:35:06.893501 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:35:06.893510 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:35:06.893518 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:35:06.893526 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:35:06.893534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:35:06.893542 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:35:06.893550 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:35:06.893558 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:35:06.893568 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:35:06.893576 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:35:06.893584 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:35:06.893592 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:35:06.893601 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:35:06.893609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:35:06.893617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:35:06.893625 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:35:06.893656 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 19:35:06.893678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:35:06.893687 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:35:06.893697 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:35:06.893706 systemd-journald[237]: Journal started Feb 13 19:35:06.893729 systemd-journald[237]: Runtime Journal (/run/log/journal/b95184b6487f485a80f4d1f184878612) is 8.0M, max 76.6M, 68.6M free. Feb 13 19:35:06.895215 systemd-modules-load[238]: Inserted module 'overlay' Feb 13 19:35:06.899449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:35:06.903424 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:35:06.907642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:35:06.913686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
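Editor's note: the device unit names above (for example dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device for /dev/disk/by-label/EFI-SYSTEM) come from systemd's unit-name path escaping: "/" separators become "-", and characters outside a small safe set, including the literal "-", are hex-escaped as \xNN. Below is a simplified re-implementation for illustration only; the authoritative rules are in systemd's unit_name_path_escape(), also exposed as `systemd-escape --path`.

```python
# Simplified sketch of systemd's path escaping for device units:
# "/" separators become "-", everything outside [A-Za-z0-9:_.] is emitted
# as \xNN (so a literal "-" becomes \x2d). The real implementation works
# byte-wise and handles corner cases such as a leading "."; this sketch
# assumes plain ASCII paths.
import string

SAFE = set(string.ascii_letters + string.digits + ":_.")

def escape_component(component: str) -> str:
    return "".join(c if c in SAFE else f"\\x{ord(c):02x}" for c in component)

def path_to_device_unit(path: str) -> str:
    components = [p for p in path.split("/") if p]
    return "-".join(escape_component(c) for c in components) + ".device"

if __name__ == "__main__":
    for p in ("/dev/disk/by-label/EFI-SYSTEM",
              "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132",
              "/dev/mapper/usr"):
        print(path_to_device_unit(p))
    # Output matches the unit names in the journal above, e.g.
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```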
Feb 13 19:35:06.917094 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:35:06.919554 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:35:06.922571 kernel: Bridge firewalling registered Feb 13 19:35:06.921879 systemd-modules-load[238]: Inserted module 'br_netfilter' Feb 13 19:35:06.930703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:35:06.931590 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:35:06.932567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:35:06.935408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:35:06.943885 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:35:06.947614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:35:06.948410 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:35:06.958470 dracut-cmdline[268]: dracut-dracut-053 Feb 13 19:35:06.962749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:35:06.964720 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 19:35:06.972805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:35:07.003264 systemd-resolved[286]: Positive Trust Anchors: Feb 13 19:35:07.003869 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:35:07.003902 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:35:07.013148 systemd-resolved[286]: Defaulting to hostname 'linux'. Feb 13 19:35:07.014891 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:35:07.016097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:35:07.060473 kernel: SCSI subsystem initialized Feb 13 19:35:07.064421 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:35:07.072448 kernel: iscsi: registered transport (tcp) Feb 13 19:35:07.088456 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:35:07.088540 kernel: QLogic iSCSI HBA Driver Feb 13 19:35:07.138437 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:35:07.144638 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:35:07.167609 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:35:07.167687 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:35:07.167717 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:35:07.218446 kernel: raid6: neonx8 gen() 15673 MB/s Feb 13 19:35:07.235493 kernel: raid6: neonx4 gen() 15710 MB/s Feb 13 19:35:07.252461 kernel: raid6: neonx2 gen() 13142 MB/s Feb 13 19:35:07.269426 kernel: raid6: neonx1 gen() 10428 MB/s Feb 13 19:35:07.286449 kernel: raid6: int64x8 gen() 6750 MB/s Feb 13 19:35:07.303454 kernel: raid6: int64x4 gen() 7309 MB/s Feb 13 19:35:07.320443 kernel: raid6: int64x2 gen() 6076 MB/s Feb 13 19:35:07.337493 kernel: raid6: int64x1 gen() 5025 MB/s Feb 13 19:35:07.337583 kernel: raid6: using algorithm neonx4 gen() 15710 MB/s Feb 13 19:35:07.354483 kernel: raid6: .... xor() 12348 MB/s, rmw enabled Feb 13 19:35:07.354579 kernel: raid6: using neon recovery algorithm Feb 13 19:35:07.358446 kernel: xor: measuring software checksum speed Feb 13 19:35:07.359563 kernel: 8regs : 19764 MB/sec Feb 13 19:35:07.359614 kernel: 32regs : 21716 MB/sec Feb 13 19:35:07.359630 kernel: arm64_neon : 27304 MB/sec Feb 13 19:35:07.359645 kernel: xor: using function: arm64_neon (27304 MB/sec) Feb 13 19:35:07.409510 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:35:07.423418 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:35:07.429690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:35:07.454776 systemd-udevd[456]: Using default interface naming scheme 'v255'. Feb 13 19:35:07.458294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:35:07.469622 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:35:07.488371 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Feb 13 19:35:07.525046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:35:07.530572 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:35:07.585380 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:35:07.592584 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:35:07.619501 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:35:07.621605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:35:07.622221 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:35:07.624793 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:35:07.635658 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:35:07.652565 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:35:07.701716 kernel: scsi host0: Virtio SCSI HBA Feb 13 19:35:07.702627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:35:07.704099 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:35:07.702740 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:35:07.705944 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 19:35:07.705455 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 19:35:07.707375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:35:07.707572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:35:07.709167 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:35:07.717419 kernel: ACPI: bus type USB registered Feb 13 19:35:07.717485 kernel: usbcore: registered new interface driver usbfs Feb 13 19:35:07.717660 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:35:07.722422 kernel: usbcore: registered new interface driver hub Feb 13 19:35:07.722488 kernel: usbcore: registered new device driver usb Feb 13 19:35:07.730866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:35:07.735633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:35:07.748581 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 19:35:07.759528 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 19:35:07.759652 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:35:07.759671 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 19:35:07.764919 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 19:35:07.765089 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 19:35:07.765176 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 19:35:07.765257 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:35:07.765361 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 19:35:07.765463 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 19:35:07.765552 kernel: hub 1-0:1.0: USB hub found Feb 13 19:35:07.765671 kernel: hub 1-0:1.0: 4 ports detected Feb 13 19:35:07.765750 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 19:35:07.765843 kernel: hub 2-0:1.0: USB hub found Feb 13 19:35:07.765929 kernel: hub 2-0:1.0: 4 ports detected Feb 13 19:35:07.764113 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:35:07.779775 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 19:35:07.790567 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 19:35:07.790681 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 19:35:07.790775 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 19:35:07.790858 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:35:07.790972 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:35:07.790986 kernel: GPT:17805311 != 80003071 Feb 13 19:35:07.790995 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:35:07.791005 kernel: GPT:17805311 != 80003071 Feb 13 19:35:07.791014 kernel: GPT: Use GNU Parted to correct GPT errors. 
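Editor's note: the GPT warnings above ("GPT:17805311 != 80003071", "Alternate GPT header not at the end of the disk") are the usual signature of a disk image whose backup GPT header was written for a smaller device and then copied onto a larger virtual disk: the backup header sits at LBA 17805311 while the 80003072-sector disk actually ends at LBA 80003071. A small worked check of the sizes involved, using only the numbers printed in the log:

```python
# Sizes implied by the kernel messages above (512-byte logical blocks).
SECTOR = 512

disk_sectors = 80_003_072          # "80003072 512-byte logical blocks"
disk_bytes = disk_sectors * SECTOR
print(f"disk: {disk_bytes / 1e9:.1f} GB / {disk_bytes / 2**30:.1f} GiB")
# -> "disk: 41.0 GB / 38.1 GiB", matching "(41.0 GB/38.1 GiB)" in the log

image_last_lba = 17_805_311        # where the backup GPT header was expected
image_bytes = (image_last_lba + 1) * SECTOR
print(f"original image size: {image_bytes / 2**30:.2f} GiB")
# -> roughly 8.49 GiB: the image was built for a much smaller disk, so its
#    backup header is nowhere near the end of this 38 GiB volume.
```

Later in the log, disk-uuid.service reports updating the primary and secondary GPT headers, after which the warning does not recur.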
Feb 13 19:35:07.791023 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:35:07.791033 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 19:35:07.828525 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (506) Feb 13 19:35:07.834370 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (516) Feb 13 19:35:07.843048 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 19:35:07.849272 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 19:35:07.855181 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 19:35:07.857562 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 19:35:07.864599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 19:35:07.880701 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:35:07.888866 disk-uuid[572]: Primary Header is updated. Feb 13 19:35:07.888866 disk-uuid[572]: Secondary Entries is updated. Feb 13 19:35:07.888866 disk-uuid[572]: Secondary Header is updated. Feb 13 19:35:07.893452 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:35:08.002156 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 19:35:08.244443 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 19:35:08.379973 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 19:35:08.380033 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 19:35:08.381458 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 19:35:08.434892 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 19:35:08.435371 kernel: usbcore: registered new interface driver usbhid Feb 13 19:35:08.436465 kernel: usbhid: USB HID core driver Feb 13 19:35:08.908102 disk-uuid[573]: The operation has completed successfully. Feb 13 19:35:08.909236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:35:08.967383 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:35:08.967552 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:35:08.981683 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:35:08.986081 sh[588]: Success Feb 13 19:35:09.000410 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:35:09.064103 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:35:09.067569 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:35:09.068755 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
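The GPT warnings above ("17805311 != 80003071") mean the backup GPT header still sits where the original, smaller disk image ended rather than at the last LBA of the provisioned disk; disk-uuid.service rewrites the headers, as logged. A small sketch of the arithmetic, using the sector count reported for sda:

    # sda has 80003072 512-byte logical blocks (logged above), so the backup
    # GPT header belongs at the last LBA of the disk.
    total_sectors = 80003072
    expected_alt_lba = total_sectors - 1   # 80003071
    found_alt_lba = 17805311               # where the original image put it

    print(f"backup GPT header expected at LBA {expected_alt_lba}, found at LBA {found_alt_lba}")
    print(f"the disk is {(expected_alt_lba - found_alt_lba) * 512 / 2**30:.1f} GiB "
          f"larger than the image the GPT was written for")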
Feb 13 19:35:09.085861 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8 Feb 13 19:35:09.085927 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:35:09.085959 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:35:09.086756 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:35:09.086782 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:35:09.094434 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:35:09.096441 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:35:09.098126 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:35:09.104671 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:35:09.108634 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:35:09.120427 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:35:09.120492 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:35:09.120507 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:35:09.124427 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:35:09.124491 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:35:09.136436 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:35:09.136289 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:35:09.142647 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:35:09.154664 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:35:09.239047 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:35:09.247666 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:35:09.259060 ignition[672]: Ignition 2.20.0 Feb 13 19:35:09.259071 ignition[672]: Stage: fetch-offline Feb 13 19:35:09.259118 ignition[672]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:35:09.261884 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:35:09.259127 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:35:09.259295 ignition[672]: parsed url from cmdline: "" Feb 13 19:35:09.259299 ignition[672]: no config URL provided Feb 13 19:35:09.259304 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:35:09.259311 ignition[672]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:35:09.259316 ignition[672]: failed to fetch config: resource requires networking Feb 13 19:35:09.259540 ignition[672]: Ignition finished successfully Feb 13 19:35:09.273586 systemd-networkd[774]: lo: Link UP Feb 13 19:35:09.273609 systemd-networkd[774]: lo: Gained carrier Feb 13 19:35:09.275324 systemd-networkd[774]: Enumeration completed Feb 13 19:35:09.275447 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:35:09.276200 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:35:09.276203 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:35:09.277060 systemd[1]: Reached target network.target - Network. Feb 13 19:35:09.278609 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:35:09.278612 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:35:09.279771 systemd-networkd[774]: eth0: Link UP Feb 13 19:35:09.279775 systemd-networkd[774]: eth0: Gained carrier Feb 13 19:35:09.279784 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:35:09.284824 systemd-networkd[774]: eth1: Link UP Feb 13 19:35:09.284828 systemd-networkd[774]: eth1: Gained carrier Feb 13 19:35:09.284839 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:35:09.285772 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:35:09.299565 ignition[778]: Ignition 2.20.0 Feb 13 19:35:09.299575 ignition[778]: Stage: fetch Feb 13 19:35:09.299760 ignition[778]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:35:09.299770 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:35:09.299856 ignition[778]: parsed url from cmdline: "" Feb 13 19:35:09.299860 ignition[778]: no config URL provided Feb 13 19:35:09.299865 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:35:09.299871 ignition[778]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:35:09.299969 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 19:35:09.300865 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 19:35:09.318534 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:35:09.342648 systemd-networkd[774]: eth0: DHCPv4 address 188.245.239.161/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 19:35:09.501703 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 19:35:09.508471 ignition[778]: GET result: OK Feb 13 19:35:09.508539 ignition[778]: parsing config with SHA512: e6240ed60092276b8596c66f42fbda4b64eb2b010a2e886b543de7bf2535c54400415819e2f7e864d17e71f5407cbb31a27654a569ba8e0d5a4b3152bc1175d1 Feb 13 19:35:09.514277 unknown[778]: fetched base config from "system" Feb 13 19:35:09.514755 ignition[778]: fetch: fetch complete Feb 13 19:35:09.514289 unknown[778]: fetched base config from "system" Feb 13 19:35:09.514761 ignition[778]: fetch: fetch passed Feb 13 19:35:09.514295 unknown[778]: fetched user config from "hetzner" Feb 13 19:35:09.514868 ignition[778]: Ignition finished successfully Feb 13 19:35:09.518166 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:35:09.525633 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
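Ignition's fetch stage above retries the Hetzner link-local metadata endpoint until networking is up, then logs the SHA-512 of the userdata it is about to parse. A rough Python equivalent of that fetch-and-hash; the endpoint is only reachable from inside the VM:

    import hashlib
    import urllib.request

    # Same endpoint Ignition queries in the log above.
    URL = "http://169.254.169.254/hetzner/v1/userdata"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        userdata = resp.read()

    # Ignition logs the SHA-512 of the config before parsing it.
    print("parsing config with SHA512:", hashlib.sha512(userdata).hexdigest())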
Feb 13 19:35:09.542427 ignition[786]: Ignition 2.20.0 Feb 13 19:35:09.542437 ignition[786]: Stage: kargs Feb 13 19:35:09.542627 ignition[786]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:35:09.542638 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:35:09.544551 ignition[786]: kargs: kargs passed Feb 13 19:35:09.547558 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:35:09.544621 ignition[786]: Ignition finished successfully Feb 13 19:35:09.553637 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:35:09.566665 ignition[792]: Ignition 2.20.0 Feb 13 19:35:09.566676 ignition[792]: Stage: disks Feb 13 19:35:09.569406 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:35:09.566934 ignition[792]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:35:09.570984 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:35:09.566972 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:35:09.572302 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:35:09.567850 ignition[792]: disks: disks passed Feb 13 19:35:09.573357 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:35:09.567907 ignition[792]: Ignition finished successfully Feb 13 19:35:09.574472 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:35:09.575767 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:35:09.583846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:35:09.605131 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 19:35:09.610258 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:35:09.615591 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:35:09.658443 kernel: EXT4-fs (sda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none. Feb 13 19:35:09.660282 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:35:09.662994 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:35:09.670533 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:35:09.674080 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:35:09.677631 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 19:35:09.678410 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:35:09.678445 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:35:09.685229 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:35:09.688228 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
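For scale, the fsck summary "clean, 14/1628000 files, 120691/1617920 blocks" works out as below, assuming the ext4 default 4 KiB block size (the log does not print it):

    # Usage behind the systemd-fsck summary for the ROOT filesystem.
    inodes_used, inodes_total = 14, 1628000
    blocks_used, blocks_total = 120691, 1617920
    block_size = 4096  # assumed ext4 default; not stated in the log

    print(f"inodes: {inodes_used / inodes_total:.4%} used")
    print(f"blocks: {blocks_used / blocks_total:.2%} used "
          f"(~{blocks_used * block_size / 2**20:.0f} MiB of "
          f"~{blocks_total * block_size / 2**30:.1f} GiB)")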
Feb 13 19:35:09.691928 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (808) Feb 13 19:35:09.696592 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:35:09.696645 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:35:09.696658 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:35:09.702936 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:35:09.703015 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:35:09.705782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:35:09.753592 coreos-metadata[810]: Feb 13 19:35:09.753 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 19:35:09.755472 coreos-metadata[810]: Feb 13 19:35:09.755 INFO Fetch successful Feb 13 19:35:09.757574 coreos-metadata[810]: Feb 13 19:35:09.756 INFO wrote hostname ci-4186-1-1-8-34e5756c8e to /sysroot/etc/hostname Feb 13 19:35:09.759369 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:35:09.762674 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:35:09.767158 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:35:09.772450 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:35:09.777325 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:35:09.886533 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:35:09.892587 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:35:09.898221 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:35:09.905456 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:35:09.933847 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:35:09.937405 ignition[926]: INFO : Ignition 2.20.0 Feb 13 19:35:09.937405 ignition[926]: INFO : Stage: mount Feb 13 19:35:09.937405 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:35:09.937405 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:35:09.939492 ignition[926]: INFO : mount: mount passed Feb 13 19:35:09.939879 ignition[926]: INFO : Ignition finished successfully Feb 13 19:35:09.941721 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:35:09.948657 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:35:10.087496 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:35:10.095776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:35:10.110437 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (938) Feb 13 19:35:10.112514 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:35:10.112582 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:35:10.112603 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:35:10.115857 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:35:10.115921 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:35:10.118630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
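The flatcar-metadata-hostname agent above fetches the hostname from the Hetzner metadata service and writes it into the new root. A rough Python equivalent, using the endpoint and destination shown in the log:

    import urllib.request

    # Endpoint and destination as logged by coreos-metadata above.
    URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")

    print("wrote hostname", hostname, "to /sysroot/etc/hostname")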
Feb 13 19:35:10.144831 ignition[955]: INFO : Ignition 2.20.0 Feb 13 19:35:10.144831 ignition[955]: INFO : Stage: files Feb 13 19:35:10.146030 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:35:10.146030 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:35:10.148035 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:35:10.149089 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:35:10.149089 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:35:10.153017 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:35:10.155206 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:35:10.155206 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:35:10.153586 unknown[955]: wrote ssh authorized keys file for user: core Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:35:10.159112 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 19:35:10.508992 systemd-networkd[774]: eth1: Gained IPv6LL Feb 13 19:35:10.740453 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 19:35:11.196769 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:35:11.196769 ignition[955]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Feb 13 19:35:11.199405 ignition[955]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 19:35:11.199405 ignition[955]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 19:35:11.199405 ignition[955]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Feb 13 19:35:11.199405 
ignition[955]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:35:11.199405 ignition[955]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:35:11.199405 ignition[955]: INFO : files: files passed Feb 13 19:35:11.199405 ignition[955]: INFO : Ignition finished successfully Feb 13 19:35:11.200600 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:35:11.205652 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:35:11.209379 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:35:11.214819 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:35:11.215016 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:35:11.230794 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:35:11.230794 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:35:11.234144 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:35:11.236362 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:35:11.238420 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:35:11.243703 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:35:11.277715 systemd-networkd[774]: eth0: Gained IPv6LL Feb 13 19:35:11.291765 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:35:11.292037 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:35:11.294794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:35:11.295957 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:35:11.297215 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:35:11.299008 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:35:11.319278 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:35:11.326645 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:35:11.338284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:35:11.339010 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:35:11.339644 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:35:11.340618 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:35:11.340740 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:35:11.342205 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:35:11.342764 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:35:11.343695 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:35:11.344655 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:35:11.345564 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
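The files stage above downloads the kubernetes sysext image and creates the /etc/extensions symlink that systemd-sysext will pick up after switch-root. A rough Python equivalent of those two file operations (Ignition performs them from its JSON config, not via a script), using the URL and paths from the log:

    import os
    import urllib.request

    URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.32.0-arm64.raw")
    image = "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
    link = "/sysroot/etc/extensions/kubernetes.raw"

    os.makedirs(os.path.dirname(image), exist_ok=True)
    urllib.request.urlretrieve(URL, image)

    os.makedirs(os.path.dirname(link), exist_ok=True)
    # The link target is the path as seen after switch-root, without /sysroot.
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw", link)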
Feb 13 19:35:11.346440 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:35:11.347334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:35:11.348364 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:35:11.349402 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:35:11.350284 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:35:11.351144 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:35:11.351264 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:35:11.352590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:35:11.353458 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:35:11.354348 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:35:11.354744 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:35:11.355339 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:35:11.355467 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:35:11.356838 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:35:11.357272 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:35:11.358104 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:35:11.358198 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:35:11.359143 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 19:35:11.359238 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 19:35:11.365699 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:35:11.366195 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:35:11.366324 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:35:11.371693 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:35:11.372716 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:35:11.373436 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:35:11.374178 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:35:11.374324 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:35:11.381661 ignition[1007]: INFO : Ignition 2.20.0 Feb 13 19:35:11.381661 ignition[1007]: INFO : Stage: umount Feb 13 19:35:11.381661 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:35:11.381661 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:35:11.399975 ignition[1007]: INFO : umount: umount passed Feb 13 19:35:11.399975 ignition[1007]: INFO : Ignition finished successfully Feb 13 19:35:11.391510 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:35:11.391684 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:35:11.396706 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:35:11.397099 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:35:11.401149 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:35:11.401224 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Feb 13 19:35:11.404360 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:35:11.405169 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:35:11.406858 systemd[1]: Stopped target network.target - Network. Feb 13 19:35:11.407683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:35:11.407751 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:35:11.408750 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:35:11.409610 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:35:11.414537 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:35:11.416368 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:35:11.417512 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:35:11.418850 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:35:11.418900 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:35:11.419818 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:35:11.419852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:35:11.421006 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:35:11.421059 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:35:11.421818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:35:11.421854 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:35:11.422856 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:35:11.423595 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:35:11.425364 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:35:11.426011 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:35:11.426105 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:35:11.427315 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:35:11.427401 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:35:11.428500 systemd-networkd[774]: eth1: DHCPv6 lease lost Feb 13 19:35:11.430184 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:35:11.430271 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:35:11.433487 systemd-networkd[774]: eth0: DHCPv6 lease lost Feb 13 19:35:11.434564 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:35:11.434681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:35:11.437709 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:35:11.437820 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:35:11.439565 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:35:11.439624 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:35:11.459133 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:35:11.460211 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:35:11.460316 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:35:11.461967 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 19:35:11.462060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:35:11.463476 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:35:11.463541 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:35:11.464598 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:35:11.464648 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:35:11.466221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:35:11.478258 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:35:11.478368 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:35:11.494678 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:35:11.494987 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:35:11.497562 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:35:11.497646 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:35:11.499525 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:35:11.499589 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:35:11.501303 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:35:11.501351 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:35:11.502667 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:35:11.502713 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:35:11.504063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:35:11.504110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:35:11.511612 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:35:11.513700 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:35:11.513843 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:35:11.517157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:35:11.517231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:35:11.518899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:35:11.519868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:35:11.521039 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:35:11.528738 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:35:11.537599 systemd[1]: Switching root. Feb 13 19:35:11.575580 systemd-journald[237]: Journal stopped Feb 13 19:35:12.576206 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Feb 13 19:35:12.576290 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:35:12.576303 kernel: SELinux: policy capability open_perms=1 Feb 13 19:35:12.576312 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:35:12.576322 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:35:12.576331 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:35:12.576340 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:35:12.576349 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:35:12.576359 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:35:12.576373 kernel: audit: type=1403 audit(1739475311.746:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:35:12.576384 systemd[1]: Successfully loaded SELinux policy in 38.084ms. Feb 13 19:35:12.576423 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.696ms. Feb 13 19:35:12.576442 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:35:12.576454 systemd[1]: Detected virtualization kvm. Feb 13 19:35:12.576464 systemd[1]: Detected architecture arm64. Feb 13 19:35:12.576474 systemd[1]: Detected first boot. Feb 13 19:35:12.576488 systemd[1]: Hostname set to . Feb 13 19:35:12.576498 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:35:12.576511 zram_generator::config[1049]: No configuration found. Feb 13 19:35:12.576526 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:35:12.576536 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:35:12.576547 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:35:12.576557 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:35:12.576568 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:35:12.576578 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:35:12.576593 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:35:12.576604 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:35:12.576615 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:35:12.576625 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:35:12.576635 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:35:12.576647 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:35:12.576657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:35:12.576667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:35:12.576677 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:35:12.576687 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:35:12.576699 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 19:35:12.576710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:35:12.576720 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:35:12.576731 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:35:12.576741 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:35:12.576776 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:35:12.576793 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:35:12.576804 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:35:12.576814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:35:12.576828 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:35:12.576840 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:35:12.576850 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:35:12.576860 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:35:12.576870 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:35:12.576880 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:35:12.576892 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:35:12.576902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:35:12.576912 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:35:12.576929 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:35:12.576954 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:35:12.576967 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:35:12.576977 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:35:12.576989 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:35:12.576999 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:35:12.577013 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:35:12.577023 systemd[1]: Reached target machines.target - Containers. Feb 13 19:35:12.577033 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:35:12.577043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:35:12.577053 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:35:12.577063 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:35:12.577074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:35:12.577083 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:35:12.577095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:35:12.577105 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:35:12.577115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 19:35:12.577126 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:35:12.577136 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:35:12.577146 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:35:12.577156 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:35:12.577171 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:35:12.577181 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:35:12.577194 kernel: loop: module loaded Feb 13 19:35:12.577207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:35:12.577220 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:35:12.577229 kernel: fuse: init (API version 7.39) Feb 13 19:35:12.577239 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:35:12.577249 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:35:12.577262 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:35:12.577272 systemd[1]: Stopped verity-setup.service. Feb 13 19:35:12.577282 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:35:12.577291 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:35:12.577301 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:35:12.577333 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:35:12.577348 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:35:12.577362 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:35:12.577372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:35:12.578603 systemd-journald[1118]: Collecting audit messages is disabled. Feb 13 19:35:12.578692 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:35:12.578709 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:35:12.578734 kernel: ACPI: bus type drm_connector registered Feb 13 19:35:12.578747 systemd-journald[1118]: Journal started Feb 13 19:35:12.578775 systemd-journald[1118]: Runtime Journal (/run/log/journal/b95184b6487f485a80f4d1f184878612) is 8.0M, max 76.6M, 68.6M free. Feb 13 19:35:12.327201 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:35:12.349984 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 19:35:12.350590 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:35:12.580813 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:35:12.588996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:35:12.589244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:35:12.592887 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:35:12.593122 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:35:12.595107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:35:12.596055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:35:12.598104 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Feb 13 19:35:12.599636 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:35:12.600559 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:35:12.602430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:35:12.608962 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:35:12.610204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:35:12.624526 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:35:12.632214 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:35:12.639339 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:35:12.647536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:35:12.651876 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:35:12.654613 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:35:12.654667 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:35:12.658465 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:35:12.663687 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:35:12.672750 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:35:12.674048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:35:12.682603 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:35:12.686355 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:35:12.687109 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:35:12.693342 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:35:12.694351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:35:12.698698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:35:12.704238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:35:12.716663 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:35:12.720337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:35:12.727978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:35:12.731001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:35:12.742460 systemd-journald[1118]: Time spent on flushing to /var/log/journal/b95184b6487f485a80f4d1f184878612 is 65.335ms for 1112 entries. Feb 13 19:35:12.742460 systemd-journald[1118]: System Journal (/var/log/journal/b95184b6487f485a80f4d1f184878612) is 8.0M, max 584.8M, 576.8M free. Feb 13 19:35:12.817017 systemd-journald[1118]: Received client request to flush runtime journal. 
Feb 13 19:35:12.817074 kernel: loop0: detected capacity change from 0 to 201592 Feb 13 19:35:12.760772 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:35:12.765323 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:35:12.773685 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:35:12.778487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:35:12.792664 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:35:12.815374 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:35:12.826513 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:35:12.834954 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:35:12.839750 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:35:12.850220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:35:12.854906 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:35:12.867414 kernel: loop1: detected capacity change from 0 to 113552 Feb 13 19:35:12.873508 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:35:12.884759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:35:12.911467 kernel: loop2: detected capacity change from 0 to 8 Feb 13 19:35:12.910862 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Feb 13 19:35:12.910874 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Feb 13 19:35:12.923738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:35:12.937425 kernel: loop3: detected capacity change from 0 to 116784 Feb 13 19:35:12.991414 kernel: loop4: detected capacity change from 0 to 201592 Feb 13 19:35:13.012800 kernel: loop5: detected capacity change from 0 to 113552 Feb 13 19:35:13.026835 kernel: loop6: detected capacity change from 0 to 8 Feb 13 19:35:13.026985 kernel: loop7: detected capacity change from 0 to 116784 Feb 13 19:35:13.037175 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 19:35:13.037645 (sd-merge)[1189]: Merged extensions into '/usr'. Feb 13 19:35:13.042632 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:35:13.042652 systemd[1]: Reloading... Feb 13 19:35:13.188252 zram_generator::config[1215]: No configuration found. Feb 13 19:35:13.310886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:35:13.364324 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:35:13.365989 systemd[1]: Reloading finished in 322 ms. Feb 13 19:35:13.388586 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:35:13.391466 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:35:13.402678 systemd[1]: Starting ensure-sysext.service... 
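The (sd-merge) lines above show systemd-sysext overlaying the four extension images into /usr. A small sketch that lists the candidate images, assuming the usual systemd-sysext search directories; the log itself only names the extensions that were merged:

    import glob

    # Directories systemd-sysext scans for *.raw images or extension trees
    # (assumed here; the log only shows the merge result).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions",
                   "/var/lib/extensions", "/usr/lib/extensions"]

    candidates = []
    for d in SEARCH_DIRS:
        candidates += sorted(glob.glob(f"{d}/*.raw")) + sorted(glob.glob(f"{d}/*/"))

    for path in candidates:
        print("candidate extension:", path)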
Feb 13 19:35:13.408707 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:35:13.430490 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:35:13.431059 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:35:13.431572 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:35:13.431591 systemd[1]: Reloading... Feb 13 19:35:13.432148 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:35:13.432466 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Feb 13 19:35:13.432588 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Feb 13 19:35:13.436512 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:35:13.436645 systemd-tmpfiles[1253]: Skipping /boot Feb 13 19:35:13.447765 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:35:13.447885 systemd-tmpfiles[1253]: Skipping /boot Feb 13 19:35:13.518415 zram_generator::config[1283]: No configuration found. Feb 13 19:35:13.635139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:35:13.686853 systemd[1]: Reloading finished in 254 ms. Feb 13 19:35:13.707527 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:35:13.722134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:35:13.733698 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:35:13.739504 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:35:13.743372 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:35:13.751133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:35:13.755595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:35:13.763653 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:35:13.771914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:35:13.778237 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:35:13.783007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:35:13.800521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:35:13.801451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:35:13.806776 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:35:13.813239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:35:13.814601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
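The systemd-tmpfiles warnings above are emitted when the same path is declared by more than one tmpfiles.d line. A simplified sketch that scans the shipped fragments for such duplicates; it keys on the path column only, which is roughly what triggers the "Duplicate line for path" message:

    import glob
    from collections import defaultdict

    # Map each configured path to the fragment lines that declare it, then
    # report paths declared more than once.
    seen = defaultdict(list)
    for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
        with open(conf) as f:
            for lineno, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(f"{conf}:{lineno}")

    for path, where in seen.items():
        if len(where) > 1:
            print(f"duplicate line for path {path!r}: {', '.join(where)}")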
Feb 13 19:35:13.819909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:35:13.836920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:35:13.839170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:35:13.841473 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:35:13.844943 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:35:13.847173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:35:13.849531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:35:13.851983 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:35:13.852281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:35:13.864196 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Feb 13 19:35:13.864222 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:35:13.870029 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:35:13.871111 systemd[1]: Finished ensure-sysext.service. Feb 13 19:35:13.879838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:35:13.880014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:35:13.881279 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:35:13.883855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:35:13.890904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:35:13.902524 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:35:13.905032 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:35:13.907467 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:35:13.911311 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:35:13.928169 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:35:13.932816 augenrules[1364]: No rules Feb 13 19:35:13.934636 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:35:13.934897 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:35:13.953045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:35:13.968734 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:35:14.032222 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:35:14.034196 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:35:14.055745 systemd-resolved[1323]: Positive Trust Anchors: Feb 13 19:35:14.055768 systemd-resolved[1323]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:35:14.055804 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:35:14.064873 systemd-resolved[1323]: Using system hostname 'ci-4186-1-1-8-34e5756c8e'. Feb 13 19:35:14.067540 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:35:14.068788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:35:14.079554 systemd-networkd[1376]: lo: Link UP Feb 13 19:35:14.079562 systemd-networkd[1376]: lo: Gained carrier Feb 13 19:35:14.080367 systemd-networkd[1376]: Enumeration completed Feb 13 19:35:14.081053 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:35:14.082330 systemd[1]: Reached target network.target - Network. Feb 13 19:35:14.091693 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:35:14.092778 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:35:14.184134 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:35:14.184509 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:35:14.186585 systemd-networkd[1376]: eth0: Link UP Feb 13 19:35:14.186880 systemd-networkd[1376]: eth0: Gained carrier Feb 13 19:35:14.186907 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:35:14.195959 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:35:14.195968 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:35:14.198419 systemd-networkd[1376]: eth1: Link UP Feb 13 19:35:14.198489 systemd-networkd[1376]: eth1: Gained carrier Feb 13 19:35:14.198632 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:35:14.219559 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:35:14.221069 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Feb 13 19:35:14.241412 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1375) Feb 13 19:35:14.241477 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:35:14.252841 systemd-networkd[1376]: eth0: DHCPv4 address 188.245.239.161/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 19:35:14.256239 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. 
Feb 13 19:35:14.302721 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 19:35:14.303112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:35:14.310707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:35:14.316323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:35:14.322121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:35:14.322921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:35:14.323003 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:35:14.324821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:35:14.325004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:35:14.339975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 19:35:14.344300 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:35:14.350241 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:35:14.351062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:35:14.352075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:35:14.352240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:35:14.354120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:35:14.354556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:35:14.365416 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 19:35:14.365474 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 19:35:14.365487 kernel: [drm] features: -context_init Feb 13 19:35:14.369480 kernel: [drm] number of scanouts: 1 Feb 13 19:35:14.378764 kernel: [drm] number of cap sets: 0 Feb 13 19:35:14.378872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:35:14.382605 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 19:35:14.393424 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 19:35:14.400429 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 19:35:14.408772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:35:14.416046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:35:14.417058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:35:14.429883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:35:14.488478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:35:14.553106 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Feb 13 19:35:14.564744 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:35:14.575441 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:35:14.603423 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:35:14.606206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:35:14.607420 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:35:14.608162 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:35:14.608994 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:35:14.609799 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:35:14.610448 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:35:14.611030 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:35:14.611607 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:35:14.611635 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:35:14.612055 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:35:14.613844 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:35:14.615797 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:35:14.620484 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:35:14.623071 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:35:14.624733 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:35:14.625654 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:35:14.626253 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:35:14.626774 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:35:14.626805 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:35:14.629547 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:35:14.634443 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:35:14.634588 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:35:14.638689 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:35:14.641698 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:35:14.645632 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:35:14.646161 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:35:14.650600 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:35:14.656630 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 19:35:14.658524 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:35:14.660615 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 19:35:14.666604 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:35:14.667825 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:35:14.668379 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:35:14.672621 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:35:14.677716 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:35:14.697582 jq[1446]: false Feb 13 19:35:14.700732 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:35:14.700982 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:35:14.714323 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:35:14.717725 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:35:14.729437 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:35:14.742750 jq[1456]: true Feb 13 19:35:14.746020 dbus-daemon[1445]: [system] SELinux support is enabled Feb 13 19:35:14.746809 coreos-metadata[1444]: Feb 13 19:35:14.745 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 19:35:14.748739 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:35:14.756566 coreos-metadata[1444]: Feb 13 19:35:14.750 INFO Fetch successful Feb 13 19:35:14.756566 coreos-metadata[1444]: Feb 13 19:35:14.754 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 19:35:14.756566 coreos-metadata[1444]: Feb 13 19:35:14.756 INFO Fetch successful Feb 13 19:35:14.754749 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:35:14.754800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:35:14.755524 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:35:14.755546 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 19:35:14.766859 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:35:14.776464 extend-filesystems[1447]: Found loop4 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found loop5 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found loop6 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found loop7 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda1 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda2 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda3 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found usr Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda4 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda6 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda7 Feb 13 19:35:14.776464 extend-filesystems[1447]: Found sda9 Feb 13 19:35:14.776464 extend-filesystems[1447]: Checking size of /dev/sda9 Feb 13 19:35:14.793901 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:35:14.818194 jq[1469]: true Feb 13 19:35:14.795450 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:35:14.820738 update_engine[1455]: I20250213 19:35:14.818924 1455 main.cc:92] Flatcar Update Engine starting Feb 13 19:35:14.828509 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:35:14.833419 update_engine[1455]: I20250213 19:35:14.832322 1455 update_check_scheduler.cc:74] Next update check in 2m19s Feb 13 19:35:14.833640 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:35:14.838819 extend-filesystems[1447]: Resized partition /dev/sda9 Feb 13 19:35:14.853418 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:35:14.864466 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 19:35:14.908190 systemd-logind[1454]: New seat seat0. Feb 13 19:35:14.920661 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:35:14.920708 systemd-logind[1454]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 19:35:14.920988 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:35:14.929536 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:35:14.930605 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:35:14.976832 bash[1510]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:35:14.980436 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:35:14.996994 systemd[1]: Starting sshkeys.service... Feb 13 19:35:15.021002 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:35:15.026418 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1390) Feb 13 19:35:15.043499 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 19:35:15.033061 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:35:15.048327 extend-filesystems[1490]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 19:35:15.048327 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 19:35:15.048327 extend-filesystems[1490]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Feb 13 19:35:15.054793 extend-filesystems[1447]: Resized filesystem in /dev/sda9 Feb 13 19:35:15.054793 extend-filesystems[1447]: Found sr0 Feb 13 19:35:15.053113 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:35:15.054515 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:35:15.117616 coreos-metadata[1517]: Feb 13 19:35:15.117 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 19:35:15.119065 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:35:15.119736 coreos-metadata[1517]: Feb 13 19:35:15.119 INFO Fetch successful Feb 13 19:35:15.123536 unknown[1517]: wrote ssh authorized keys file for user: core Feb 13 19:35:15.158237 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:35:15.171218 update-ssh-keys[1527]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:35:15.175451 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:35:15.181110 systemd[1]: Finished sshkeys.service. Feb 13 19:35:15.188385 containerd[1467]: time="2025-02-13T19:35:15.188295640Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:35:15.196371 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:35:15.204181 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:35:15.212850 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:35:15.214455 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:35:15.223817 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:35:15.225940 containerd[1467]: time="2025-02-13T19:35:15.225654400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:35:15.227338 containerd[1467]: time="2025-02-13T19:35:15.227278400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:35:15.227338 containerd[1467]: time="2025-02-13T19:35:15.227325400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:35:15.227338 containerd[1467]: time="2025-02-13T19:35:15.227347080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227552840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227578760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227654680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227668360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227848920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227863920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227877880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227886960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.227988560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.228208960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228439 containerd[1467]: time="2025-02-13T19:35:15.228311600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:35:15.228764 containerd[1467]: time="2025-02-13T19:35:15.228326320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:35:15.228764 containerd[1467]: time="2025-02-13T19:35:15.228425560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:35:15.228764 containerd[1467]: time="2025-02-13T19:35:15.228470640Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:35:15.235651 containerd[1467]: time="2025-02-13T19:35:15.235578360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:35:15.235762 containerd[1467]: time="2025-02-13T19:35:15.235677640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:35:15.235762 containerd[1467]: time="2025-02-13T19:35:15.235704600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:35:15.235762 containerd[1467]: time="2025-02-13T19:35:15.235731520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:35:15.235762 containerd[1467]: time="2025-02-13T19:35:15.235754600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:35:15.236249 containerd[1467]: time="2025-02-13T19:35:15.236043480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236516360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236681280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236699920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236717040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236731320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236746720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236760000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236773800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236788840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236803200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236816000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236828360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236850800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237361 containerd[1467]: time="2025-02-13T19:35:15.236863920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.236875840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.236888080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.236899720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.236914640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.236968920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.236988520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237003080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237019600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237040240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237053000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237065120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237082800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237105200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237118640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.237913 containerd[1467]: time="2025-02-13T19:35:15.237130280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237319480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237340680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237354520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237366880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237376160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237408360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237419960Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:35:15.238360 containerd[1467]: time="2025-02-13T19:35:15.237429800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:35:15.238669 containerd[1467]: time="2025-02-13T19:35:15.237787840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:35:15.238669 containerd[1467]: time="2025-02-13T19:35:15.237838240Z" level=info msg="Connect containerd service" Feb 13 19:35:15.238669 containerd[1467]: time="2025-02-13T19:35:15.237875640Z" level=info msg="using legacy CRI server" Feb 13 19:35:15.238669 containerd[1467]: time="2025-02-13T19:35:15.237882160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:35:15.238669 containerd[1467]: time="2025-02-13T19:35:15.238202200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:35:15.240823 containerd[1467]: time="2025-02-13T19:35:15.240768480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:35:15.242792 
containerd[1467]: time="2025-02-13T19:35:15.241381720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241458600Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241463360Z" level=info msg="Start subscribing containerd event" Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241534800Z" level=info msg="Start recovering state" Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241616920Z" level=info msg="Start event monitor" Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241630440Z" level=info msg="Start snapshots syncer" Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241641560Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241649240Z" level=info msg="Start streaming server" Feb 13 19:35:15.242792 containerd[1467]: time="2025-02-13T19:35:15.241806640Z" level=info msg="containerd successfully booted in 0.054342s" Feb 13 19:35:15.242086 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:35:15.244035 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:35:15.255962 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:35:15.258695 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:35:15.259590 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:35:15.436619 systemd-networkd[1376]: eth0: Gained IPv6LL Feb 13 19:35:15.438587 systemd-networkd[1376]: eth1: Gained IPv6LL Feb 13 19:35:15.439043 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Feb 13 19:35:15.441197 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:35:15.442860 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:35:15.451044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:35:15.454843 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:35:15.484813 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:35:16.187980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:35:16.189151 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:35:16.193507 systemd[1]: Startup finished in 777ms (kernel) + 5.062s (initrd) + 4.484s (userspace) = 10.325s. Feb 13 19:35:16.206822 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:35:16.212688 agetty[1550]: failed to open credentials directory Feb 13 19:35:16.217151 agetty[1549]: failed to open credentials directory Feb 13 19:35:16.728662 kubelet[1567]: E0213 19:35:16.728600 1567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:35:16.731726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:35:16.732033 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
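The kubelet failure above, and the restart loop that follows, stem from the missing /var/lib/kubelet/config.yaml named in the error. As an illustrative sketch only (every value below is an assumption for illustration, nothing is taken from this host), a minimal KubeletConfiguration of the kind the file-based loader expects looks roughly like this:

  # hypothetical /var/lib/kubelet/config.yaml (illustrative values only, not from this log)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  authentication:
    anonymous:
      enabled: false
    webhook:
      enabled: true
  authorization:
    mode: Webhook
  clusterDomain: cluster.local
  clusterDNS:
    - 10.96.0.10

Such a file is ordinarily written by kubeadm or a provisioning step; in this log the scheduled restarts keep failing until the /home/core/install.sh run recorded near the end, after which kubelet comes up and logs its version and flag warnings instead of the config error.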
Feb 13 19:35:26.982783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:35:26.989813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:35:27.115443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:35:27.125865 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:35:27.181092 kubelet[1586]: E0213 19:35:27.181000 1586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:35:27.184638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:35:27.184858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:35:37.435564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:35:37.445686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:35:37.599690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:35:37.606210 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:35:37.666377 kubelet[1601]: E0213 19:35:37.666294 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:35:37.671649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:35:37.672155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:35:45.927873 systemd-timesyncd[1354]: Contacted time server 185.41.106.152:123 (2.flatcar.pool.ntp.org). Feb 13 19:35:45.928007 systemd-timesyncd[1354]: Initial clock synchronization to Thu 2025-02-13 19:35:45.683610 UTC. Feb 13 19:35:47.922452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:35:47.927685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:35:48.086684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:35:48.086988 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:35:48.137322 kubelet[1616]: E0213 19:35:48.137256 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:35:48.140597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:35:48.140819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:35:58.391656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 19:35:58.401793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:35:58.519747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:35:58.519817 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:35:58.560453 kubelet[1631]: E0213 19:35:58.559361 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:35:58.561096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:35:58.561245 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:35:59.913499 update_engine[1455]: I20250213 19:35:59.913199 1455 update_attempter.cc:509] Updating boot flags... Feb 13 19:35:59.970463 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1646) Feb 13 19:36:00.047433 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1648) Feb 13 19:36:00.100459 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1648) Feb 13 19:36:08.648479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 19:36:08.662956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:36:08.822755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:36:08.822867 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:36:08.878503 kubelet[1666]: E0213 19:36:08.878454 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:36:08.882065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:36:08.882434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:36:18.898307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 19:36:18.907742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:36:19.051725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:36:19.053683 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:36:19.107052 kubelet[1681]: E0213 19:36:19.106835 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:36:19.109013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:36:19.109167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:36:29.148556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Feb 13 19:36:29.155793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:36:29.289261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:36:29.301995 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:36:29.347485 kubelet[1696]: E0213 19:36:29.347433 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:36:29.350217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:36:29.350702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:36:39.399063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 19:36:39.410697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:36:39.536360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:36:39.551075 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:36:39.602762 kubelet[1711]: E0213 19:36:39.602716 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:36:39.605555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:36:39.605853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:36:49.648309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 19:36:49.656682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:36:49.775634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:36:49.787985 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:36:49.831851 kubelet[1726]: E0213 19:36:49.831776 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:36:49.834443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:36:49.834726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:36:59.898151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 19:36:59.913660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:00.050454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:37:00.066036 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:37:00.107777 kubelet[1740]: E0213 19:37:00.107711 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:37:00.110321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:37:00.110512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:37:09.560383 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:37:09.570779 systemd[1]: Started sshd@0-188.245.239.161:22-147.75.109.163:37696.service - OpenSSH per-connection server daemon (147.75.109.163:37696). Feb 13 19:37:10.148104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 19:37:10.153717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:10.294810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:10.304964 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:37:10.358275 kubelet[1759]: E0213 19:37:10.358204 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:37:10.362997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:37:10.363293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:37:10.559629 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 37696 ssh2: RSA SHA256:M9lPYvS0yTh25PbmsMwTLicbNiMLmqxaG6Qj0FYC7QQ Feb 13 19:37:10.562007 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:10.576014 systemd-logind[1454]: New session 1 of user core. Feb 13 19:37:10.578494 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:37:10.584949 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:37:10.612510 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:37:10.619952 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:37:10.626285 (systemd)[1769]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:37:10.742627 systemd[1769]: Queued start job for default target default.target. Feb 13 19:37:10.752698 systemd[1769]: Created slice app.slice - User Application Slice. Feb 13 19:37:10.752765 systemd[1769]: Reached target paths.target - Paths. Feb 13 19:37:10.752791 systemd[1769]: Reached target timers.target - Timers. Feb 13 19:37:10.755320 systemd[1769]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:37:10.772556 systemd[1769]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:37:10.773065 systemd[1769]: Reached target sockets.target - Sockets. 
Feb 13 19:37:10.773115 systemd[1769]: Reached target basic.target - Basic System. Feb 13 19:37:10.773285 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:37:10.774523 systemd[1769]: Reached target default.target - Main User Target. Feb 13 19:37:10.774596 systemd[1769]: Startup finished in 141ms. Feb 13 19:37:10.781708 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:37:11.485886 systemd[1]: Started sshd@1-188.245.239.161:22-147.75.109.163:37708.service - OpenSSH per-connection server daemon (147.75.109.163:37708). Feb 13 19:37:12.486195 sshd[1780]: Accepted publickey for core from 147.75.109.163 port 37708 ssh2: RSA SHA256:M9lPYvS0yTh25PbmsMwTLicbNiMLmqxaG6Qj0FYC7QQ Feb 13 19:37:12.489419 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:12.497333 systemd-logind[1454]: New session 2 of user core. Feb 13 19:37:12.503727 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:37:13.187274 sshd[1782]: Connection closed by 147.75.109.163 port 37708 Feb 13 19:37:13.186257 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:13.190205 systemd[1]: sshd@1-188.245.239.161:22-147.75.109.163:37708.service: Deactivated successfully. Feb 13 19:37:13.191926 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:37:13.194112 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:37:13.195262 systemd-logind[1454]: Removed session 2. Feb 13 19:37:13.359565 systemd[1]: Started sshd@2-188.245.239.161:22-147.75.109.163:37716.service - OpenSSH per-connection server daemon (147.75.109.163:37716). Feb 13 19:37:14.346016 sshd[1787]: Accepted publickey for core from 147.75.109.163 port 37716 ssh2: RSA SHA256:M9lPYvS0yTh25PbmsMwTLicbNiMLmqxaG6Qj0FYC7QQ Feb 13 19:37:14.349167 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:14.354723 systemd-logind[1454]: New session 3 of user core. Feb 13 19:37:14.365775 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:37:15.023130 sshd[1789]: Connection closed by 147.75.109.163 port 37716 Feb 13 19:37:15.024296 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:15.028744 systemd[1]: sshd@2-188.245.239.161:22-147.75.109.163:37716.service: Deactivated successfully. Feb 13 19:37:15.030705 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:37:15.032782 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:37:15.034001 systemd-logind[1454]: Removed session 3. Feb 13 19:37:15.204531 systemd[1]: Started sshd@3-188.245.239.161:22-147.75.109.163:37722.service - OpenSSH per-connection server daemon (147.75.109.163:37722). Feb 13 19:37:16.180742 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 37722 ssh2: RSA SHA256:M9lPYvS0yTh25PbmsMwTLicbNiMLmqxaG6Qj0FYC7QQ Feb 13 19:37:16.182944 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:16.188347 systemd-logind[1454]: New session 4 of user core. Feb 13 19:37:16.193669 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:37:16.859799 sshd[1796]: Connection closed by 147.75.109.163 port 37722 Feb 13 19:37:16.860538 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:16.866031 systemd[1]: sshd@3-188.245.239.161:22-147.75.109.163:37722.service: Deactivated successfully. 
Feb 13 19:37:16.869277 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:37:16.870323 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:37:16.871493 systemd-logind[1454]: Removed session 4. Feb 13 19:37:17.038915 systemd[1]: Started sshd@4-188.245.239.161:22-147.75.109.163:37736.service - OpenSSH per-connection server daemon (147.75.109.163:37736). Feb 13 19:37:18.023644 sshd[1801]: Accepted publickey for core from 147.75.109.163 port 37736 ssh2: RSA SHA256:M9lPYvS0yTh25PbmsMwTLicbNiMLmqxaG6Qj0FYC7QQ Feb 13 19:37:18.025668 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:18.032472 systemd-logind[1454]: New session 5 of user core. Feb 13 19:37:18.038785 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:37:18.558569 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:37:18.559017 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:18.575515 sudo[1804]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:18.737426 sshd[1803]: Connection closed by 147.75.109.163 port 37736 Feb 13 19:37:18.736184 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:18.741655 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:37:18.742137 systemd[1]: sshd@4-188.245.239.161:22-147.75.109.163:37736.service: Deactivated successfully. Feb 13 19:37:18.744238 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:37:18.746301 systemd-logind[1454]: Removed session 5. Feb 13 19:37:18.904777 systemd[1]: Started sshd@5-188.245.239.161:22-147.75.109.163:37740.service - OpenSSH per-connection server daemon (147.75.109.163:37740). Feb 13 19:37:19.892204 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 37740 ssh2: RSA SHA256:M9lPYvS0yTh25PbmsMwTLicbNiMLmqxaG6Qj0FYC7QQ Feb 13 19:37:19.894723 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:19.900707 systemd-logind[1454]: New session 6 of user core. Feb 13 19:37:19.912783 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:37:20.398320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 19:37:20.411865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:20.423109 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:37:20.425589 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:20.433638 sudo[1814]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:20.445979 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:37:20.446778 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:20.465135 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:37:20.524931 augenrules[1838]: No rules Feb 13 19:37:20.527130 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:37:20.528293 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:37:20.531201 sudo[1813]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:20.577118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:37:20.592075 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:37:20.648125 kubelet[1848]: E0213 19:37:20.648070 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:37:20.651011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:37:20.651233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:37:20.690860 sshd[1811]: Connection closed by 147.75.109.163 port 37740 Feb 13 19:37:20.691869 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:20.697165 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:37:20.699022 systemd[1]: sshd@5-188.245.239.161:22-147.75.109.163:37740.service: Deactivated successfully. Feb 13 19:37:20.701156 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:37:20.703651 systemd-logind[1454]: Removed session 6. Feb 13 19:37:20.870831 systemd[1]: Started sshd@6-188.245.239.161:22-147.75.109.163:57242.service - OpenSSH per-connection server daemon (147.75.109.163:57242). Feb 13 19:37:21.853510 sshd[1857]: Accepted publickey for core from 147.75.109.163 port 57242 ssh2: RSA SHA256:M9lPYvS0yTh25PbmsMwTLicbNiMLmqxaG6Qj0FYC7QQ Feb 13 19:37:21.855497 sshd-session[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:21.861621 systemd-logind[1454]: New session 7 of user core. Feb 13 19:37:21.867735 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:37:22.376905 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:37:22.377301 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:23.000959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:23.010891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:23.044154 systemd[1]: Reloading requested from client PID 1893 ('systemctl') (unit session-7.scope)... Feb 13 19:37:23.044170 systemd[1]: Reloading... Feb 13 19:37:23.166415 zram_generator::config[1929]: No configuration found. Feb 13 19:37:23.281265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:37:23.349430 systemd[1]: Reloading finished in 304 ms. Feb 13 19:37:23.407785 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:37:23.407925 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:37:23.408296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:23.419448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:23.528686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:37:23.540900 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:37:23.583430 kubelet[1981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:37:23.583430 kubelet[1981]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:37:23.583430 kubelet[1981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:37:23.583907 kubelet[1981]: I0213 19:37:23.583524 1981 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:37:24.234488 kubelet[1981]: I0213 19:37:24.233494 1981 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:37:24.234488 kubelet[1981]: I0213 19:37:24.233540 1981 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:37:24.234488 kubelet[1981]: I0213 19:37:24.234062 1981 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:37:24.268247 kubelet[1981]: I0213 19:37:24.268216 1981 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:37:24.281672 kubelet[1981]: E0213 19:37:24.281583 1981 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:37:24.281672 kubelet[1981]: I0213 19:37:24.281664 1981 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:37:24.285874 kubelet[1981]: I0213 19:37:24.285706 1981 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:37:24.287476 kubelet[1981]: I0213 19:37:24.287373 1981 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:37:24.287919 kubelet[1981]: I0213 19:37:24.287491 1981 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:37:24.288100 kubelet[1981]: I0213 19:37:24.287938 1981 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:37:24.288100 kubelet[1981]: I0213 19:37:24.287961 1981 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:37:24.288653 kubelet[1981]: I0213 19:37:24.288266 1981 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:37:24.291225 kubelet[1981]: I0213 19:37:24.291187 1981 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:37:24.291225 kubelet[1981]: I0213 19:37:24.291224 1981 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:37:24.291342 kubelet[1981]: I0213 19:37:24.291249 1981 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:37:24.291342 kubelet[1981]: I0213 19:37:24.291261 1981 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:37:24.293104 kubelet[1981]: E0213 19:37:24.293066 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:24.293247 kubelet[1981]: E0213 19:37:24.293233 1981 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:24.297226 kubelet[1981]: I0213 19:37:24.297177 1981 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:37:24.297981 kubelet[1981]: I0213 19:37:24.297927 1981 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:37:24.298158 kubelet[1981]: W0213 19:37:24.298073 1981 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:37:24.299040 kubelet[1981]: I0213 19:37:24.298999 1981 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:37:24.299040 kubelet[1981]: I0213 19:37:24.299036 1981 server.go:1287] "Started kubelet" Feb 13 19:37:24.301088 kubelet[1981]: I0213 19:37:24.300865 1981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:37:24.302431 kubelet[1981]: I0213 19:37:24.301626 1981 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:37:24.302431 kubelet[1981]: I0213 19:37:24.301957 1981 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:37:24.302431 kubelet[1981]: I0213 19:37:24.302020 1981 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:37:24.304100 kubelet[1981]: I0213 19:37:24.303083 1981 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:37:24.314408 kubelet[1981]: I0213 19:37:24.313436 1981 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:37:24.314408 kubelet[1981]: I0213 19:37:24.314222 1981 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:37:24.314990 kubelet[1981]: E0213 19:37:24.314957 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:24.319569 kubelet[1981]: I0213 19:37:24.319510 1981 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:37:24.319691 kubelet[1981]: I0213 19:37:24.319626 1981 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:37:24.327288 kubelet[1981]: I0213 19:37:24.327254 1981 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:37:24.327426 kubelet[1981]: I0213 19:37:24.327375 1981 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:37:24.329876 kubelet[1981]: E0213 19:37:24.329839 1981 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:37:24.330088 kubelet[1981]: W0213 19:37:24.330060 1981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:37:24.330171 kubelet[1981]: E0213 19:37:24.330152 1981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:37:24.330270 kubelet[1981]: W0213 19:37:24.330257 1981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:37:24.330335 
kubelet[1981]: E0213 19:37:24.330320 1981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:37:24.330700 kubelet[1981]: E0213 19:37:24.330414 1981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1823dbb350f52c0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-02-13 19:37:24.299017231 +0000 UTC m=+0.753934738,LastTimestamp:2025-02-13 19:37:24.299017231 +0000 UTC m=+0.753934738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Feb 13 19:37:24.330960 kubelet[1981]: W0213 19:37:24.330939 1981 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:37:24.331048 kubelet[1981]: E0213 19:37:24.331034 1981 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:37:24.331246 kubelet[1981]: I0213 19:37:24.331228 1981 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:37:24.331912 kubelet[1981]: E0213 19:37:24.331877 1981 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:37:24.358476 kubelet[1981]: I0213 19:37:24.358445 1981 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:37:24.358642 kubelet[1981]: I0213 19:37:24.358629 1981 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:37:24.358700 kubelet[1981]: I0213 19:37:24.358692 1981 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:37:24.362130 kubelet[1981]: I0213 19:37:24.362104 1981 policy_none.go:49] "None policy: Start" Feb 13 19:37:24.362273 kubelet[1981]: I0213 19:37:24.362262 1981 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:37:24.362337 kubelet[1981]: I0213 19:37:24.362328 1981 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:37:24.371642 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:37:24.377221 kubelet[1981]: I0213 19:37:24.377180 1981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:37:24.379729 kubelet[1981]: I0213 19:37:24.379699 1981 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:37:24.379891 kubelet[1981]: I0213 19:37:24.379880 1981 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:37:24.380128 kubelet[1981]: I0213 19:37:24.380114 1981 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:37:24.380213 kubelet[1981]: I0213 19:37:24.380204 1981 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:37:24.380425 kubelet[1981]: E0213 19:37:24.380407 1981 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:37:24.388026 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:37:24.391490 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:37:24.407382 kubelet[1981]: I0213 19:37:24.407340 1981 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:37:24.408785 kubelet[1981]: I0213 19:37:24.408012 1981 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:37:24.408785 kubelet[1981]: I0213 19:37:24.408043 1981 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:37:24.408785 kubelet[1981]: I0213 19:37:24.408670 1981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:37:24.412126 kubelet[1981]: E0213 19:37:24.412042 1981 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:37:24.412126 kubelet[1981]: E0213 19:37:24.412098 1981 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Feb 13 19:37:24.509930 kubelet[1981]: I0213 19:37:24.509732 1981 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.4" Feb 13 19:37:24.517051 kubelet[1981]: I0213 19:37:24.516844 1981 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.4" Feb 13 19:37:24.517051 kubelet[1981]: E0213 19:37:24.516881 1981 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found" Feb 13 19:37:24.524853 kubelet[1981]: E0213 19:37:24.524696 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:24.582546 sudo[1860]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:24.625855 kubelet[1981]: E0213 19:37:24.625769 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:24.727346 kubelet[1981]: E0213 19:37:24.727282 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:24.742489 sshd[1859]: Connection closed by 147.75.109.163 port 57242 Feb 13 19:37:24.743301 sshd-session[1857]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:24.749218 systemd[1]: sshd@6-188.245.239.161:22-147.75.109.163:57242.service: Deactivated successfully. Feb 13 19:37:24.751257 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:37:24.752289 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. 
Feb 13 19:37:24.753828 systemd-logind[1454]: Removed session 7. Feb 13 19:37:24.828034 kubelet[1981]: E0213 19:37:24.827871 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:24.929115 kubelet[1981]: E0213 19:37:24.929008 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:25.029349 kubelet[1981]: E0213 19:37:25.029211 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:25.130625 kubelet[1981]: E0213 19:37:25.130268 1981 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:37:25.232605 kubelet[1981]: I0213 19:37:25.232563 1981 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:37:25.233128 containerd[1467]: time="2025-02-13T19:37:25.233073163Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:37:25.234257 kubelet[1981]: I0213 19:37:25.233442 1981 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:37:25.237911 kubelet[1981]: I0213 19:37:25.237863 1981 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:37:25.238126 kubelet[1981]: W0213 19:37:25.238101 1981 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:37:25.238232 kubelet[1981]: W0213 19:37:25.238154 1981 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:37:25.238232 kubelet[1981]: W0213 19:37:25.238193 1981 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:37:25.293695 kubelet[1981]: E0213 19:37:25.293626 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:25.293695 kubelet[1981]: I0213 19:37:25.293666 1981 apiserver.go:52] "Watching apiserver" Feb 13 19:37:25.311250 kubelet[1981]: E0213 19:37:25.310965 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:25.319448 systemd[1]: Created slice kubepods-besteffort-podce38e5bc_9cd8_438c_82f3_bf3f8d8f77fa.slice - libcontainer container kubepods-besteffort-podce38e5bc_9cd8_438c_82f3_bf3f8d8f77fa.slice. 
Feb 13 19:37:25.321111 kubelet[1981]: I0213 19:37:25.320098 1981 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:37:25.329834 kubelet[1981]: I0213 19:37:25.329762 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-xtables-lock\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.329834 kubelet[1981]: I0213 19:37:25.329827 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-cni-bin-dir\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.329979 kubelet[1981]: I0213 19:37:25.329852 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9f2320c-ee95-40d0-96bc-00a1af393157-socket-dir\") pod \"csi-node-driver-lxcht\" (UID: \"f9f2320c-ee95-40d0-96bc-00a1af393157\") " pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:25.329979 kubelet[1981]: I0213 19:37:25.329870 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9f2320c-ee95-40d0-96bc-00a1af393157-registration-dir\") pod \"csi-node-driver-lxcht\" (UID: \"f9f2320c-ee95-40d0-96bc-00a1af393157\") " pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:25.329979 kubelet[1981]: I0213 19:37:25.329897 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51400619-f322-4903-a250-ce3da315c74f-kube-proxy\") pod \"kube-proxy-m8dkv\" (UID: \"51400619-f322-4903-a250-ce3da315c74f\") " pod="kube-system/kube-proxy-m8dkv" Feb 13 19:37:25.329979 kubelet[1981]: I0213 19:37:25.329932 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-lib-modules\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.329979 kubelet[1981]: I0213 19:37:25.329959 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-node-certs\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330089 kubelet[1981]: I0213 19:37:25.329974 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-cni-log-dir\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330089 kubelet[1981]: I0213 19:37:25.330031 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvmt\" (UniqueName: \"kubernetes.io/projected/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-kube-api-access-qgvmt\") pod \"calico-node-7qq6h\" (UID: 
\"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330089 kubelet[1981]: I0213 19:37:25.330050 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmrln\" (UniqueName: \"kubernetes.io/projected/f9f2320c-ee95-40d0-96bc-00a1af393157-kube-api-access-vmrln\") pod \"csi-node-driver-lxcht\" (UID: \"f9f2320c-ee95-40d0-96bc-00a1af393157\") " pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:25.330089 kubelet[1981]: I0213 19:37:25.330068 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51400619-f322-4903-a250-ce3da315c74f-xtables-lock\") pod \"kube-proxy-m8dkv\" (UID: \"51400619-f322-4903-a250-ce3da315c74f\") " pod="kube-system/kube-proxy-m8dkv" Feb 13 19:37:25.330089 kubelet[1981]: I0213 19:37:25.330086 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51400619-f322-4903-a250-ce3da315c74f-lib-modules\") pod \"kube-proxy-m8dkv\" (UID: \"51400619-f322-4903-a250-ce3da315c74f\") " pod="kube-system/kube-proxy-m8dkv" Feb 13 19:37:25.330191 kubelet[1981]: I0213 19:37:25.330106 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-var-run-calico\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330191 kubelet[1981]: I0213 19:37:25.330130 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-flexvol-driver-host\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330191 kubelet[1981]: I0213 19:37:25.330149 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f9f2320c-ee95-40d0-96bc-00a1af393157-varrun\") pod \"csi-node-driver-lxcht\" (UID: \"f9f2320c-ee95-40d0-96bc-00a1af393157\") " pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:25.330191 kubelet[1981]: I0213 19:37:25.330176 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9f2320c-ee95-40d0-96bc-00a1af393157-kubelet-dir\") pod \"csi-node-driver-lxcht\" (UID: \"f9f2320c-ee95-40d0-96bc-00a1af393157\") " pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:25.330270 kubelet[1981]: I0213 19:37:25.330212 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstg7\" (UniqueName: \"kubernetes.io/projected/51400619-f322-4903-a250-ce3da315c74f-kube-api-access-lstg7\") pod \"kube-proxy-m8dkv\" (UID: \"51400619-f322-4903-a250-ce3da315c74f\") " pod="kube-system/kube-proxy-m8dkv" Feb 13 19:37:25.330270 kubelet[1981]: I0213 19:37:25.330230 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-policysync\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " 
pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330270 kubelet[1981]: I0213 19:37:25.330252 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-tigera-ca-bundle\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330335 kubelet[1981]: I0213 19:37:25.330274 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-var-lib-calico\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.330335 kubelet[1981]: I0213 19:37:25.330291 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa-cni-net-dir\") pod \"calico-node-7qq6h\" (UID: \"ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa\") " pod="calico-system/calico-node-7qq6h" Feb 13 19:37:25.347745 systemd[1]: Created slice kubepods-besteffort-pod51400619_f322_4903_a250_ce3da315c74f.slice - libcontainer container kubepods-besteffort-pod51400619_f322_4903_a250_ce3da315c74f.slice. Feb 13 19:37:25.433557 kubelet[1981]: E0213 19:37:25.433520 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.433557 kubelet[1981]: W0213 19:37:25.433553 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.433922 kubelet[1981]: E0213 19:37:25.433591 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.433922 kubelet[1981]: E0213 19:37:25.433842 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.433922 kubelet[1981]: W0213 19:37:25.433858 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.433922 kubelet[1981]: E0213 19:37:25.433878 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.434140 kubelet[1981]: E0213 19:37:25.434127 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.434179 kubelet[1981]: W0213 19:37:25.434142 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.434179 kubelet[1981]: E0213 19:37:25.434162 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:25.434511 kubelet[1981]: E0213 19:37:25.434492 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.434592 kubelet[1981]: W0213 19:37:25.434514 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.434592 kubelet[1981]: E0213 19:37:25.434540 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.434777 kubelet[1981]: E0213 19:37:25.434763 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.434834 kubelet[1981]: W0213 19:37:25.434779 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.434834 kubelet[1981]: E0213 19:37:25.434797 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.435002 kubelet[1981]: E0213 19:37:25.434985 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.435002 kubelet[1981]: W0213 19:37:25.434999 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.435236 kubelet[1981]: E0213 19:37:25.435069 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.435236 kubelet[1981]: E0213 19:37:25.435181 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.435236 kubelet[1981]: W0213 19:37:25.435191 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.435514 kubelet[1981]: E0213 19:37:25.435378 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.435514 kubelet[1981]: E0213 19:37:25.435385 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.435514 kubelet[1981]: W0213 19:37:25.435412 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.435821 kubelet[1981]: E0213 19:37:25.435685 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:25.435987 kubelet[1981]: E0213 19:37:25.435971 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.436148 kubelet[1981]: W0213 19:37:25.436047 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.436258 kubelet[1981]: E0213 19:37:25.436228 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.437068 kubelet[1981]: E0213 19:37:25.437004 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.437068 kubelet[1981]: W0213 19:37:25.437022 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.437166 kubelet[1981]: E0213 19:37:25.437138 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.437555 kubelet[1981]: E0213 19:37:25.437531 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.437555 kubelet[1981]: W0213 19:37:25.437549 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.438443 kubelet[1981]: E0213 19:37:25.437817 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.438443 kubelet[1981]: W0213 19:37:25.437833 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.438443 kubelet[1981]: E0213 19:37:25.438032 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.438443 kubelet[1981]: W0213 19:37:25.438046 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.438443 kubelet[1981]: E0213 19:37:25.438143 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.438443 kubelet[1981]: E0213 19:37:25.438174 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.438443 kubelet[1981]: E0213 19:37:25.438190 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:25.438443 kubelet[1981]: E0213 19:37:25.438214 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.438443 kubelet[1981]: W0213 19:37:25.438223 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.438712 kubelet[1981]: E0213 19:37:25.438560 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.438712 kubelet[1981]: W0213 19:37:25.438574 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.438762 kubelet[1981]: E0213 19:37:25.438751 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.438786 kubelet[1981]: W0213 19:37:25.438760 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.438930 kubelet[1981]: E0213 19:37:25.438831 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.438930 kubelet[1981]: E0213 19:37:25.438865 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.438930 kubelet[1981]: E0213 19:37:25.438885 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.438930 kubelet[1981]: E0213 19:37:25.438916 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.438930 kubelet[1981]: W0213 19:37:25.438926 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.439060 kubelet[1981]: E0213 19:37:25.438948 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.439302 kubelet[1981]: E0213 19:37:25.439180 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.439302 kubelet[1981]: W0213 19:37:25.439205 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.439302 kubelet[1981]: E0213 19:37:25.439216 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:25.439558 kubelet[1981]: E0213 19:37:25.439541 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.439558 kubelet[1981]: W0213 19:37:25.439556 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.439667 kubelet[1981]: E0213 19:37:25.439606 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.440438 kubelet[1981]: E0213 19:37:25.439953 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.440438 kubelet[1981]: W0213 19:37:25.439970 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.440438 kubelet[1981]: E0213 19:37:25.439984 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.440878 kubelet[1981]: E0213 19:37:25.440859 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.440947 kubelet[1981]: W0213 19:37:25.440934 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.441002 kubelet[1981]: E0213 19:37:25.440991 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.441258 kubelet[1981]: E0213 19:37:25.441243 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.441337 kubelet[1981]: W0213 19:37:25.441325 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.441438 kubelet[1981]: E0213 19:37:25.441420 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.441810 kubelet[1981]: E0213 19:37:25.441793 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.441894 kubelet[1981]: W0213 19:37:25.441881 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.441952 kubelet[1981]: E0213 19:37:25.441941 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:25.443349 kubelet[1981]: E0213 19:37:25.443311 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.443349 kubelet[1981]: W0213 19:37:25.443332 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.443349 kubelet[1981]: E0213 19:37:25.443347 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.450408 kubelet[1981]: E0213 19:37:25.448669 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.450408 kubelet[1981]: W0213 19:37:25.448695 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.450408 kubelet[1981]: E0213 19:37:25.448717 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.457022 kubelet[1981]: E0213 19:37:25.456923 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.457022 kubelet[1981]: W0213 19:37:25.456947 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.457022 kubelet[1981]: E0213 19:37:25.456968 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:25.459695 kubelet[1981]: E0213 19:37:25.459665 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:25.459695 kubelet[1981]: W0213 19:37:25.459690 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:25.459827 kubelet[1981]: E0213 19:37:25.459712 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:25.647095 containerd[1467]: time="2025-02-13T19:37:25.647050965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qq6h,Uid:ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa,Namespace:calico-system,Attempt:0,}" Feb 13 19:37:25.652694 containerd[1467]: time="2025-02-13T19:37:25.652333227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8dkv,Uid:51400619-f322-4903-a250-ce3da315c74f,Namespace:kube-system,Attempt:0,}" Feb 13 19:37:26.262567 containerd[1467]: time="2025-02-13T19:37:26.261709794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:26.264533 containerd[1467]: time="2025-02-13T19:37:26.264459345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 19:37:26.267150 containerd[1467]: time="2025-02-13T19:37:26.267080100Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:26.270107 containerd[1467]: time="2025-02-13T19:37:26.269989766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:26.271189 containerd[1467]: time="2025-02-13T19:37:26.270970334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:37:26.275248 containerd[1467]: time="2025-02-13T19:37:26.275068322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:26.280044 containerd[1467]: time="2025-02-13T19:37:26.279694732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 632.53657ms" Feb 13 19:37:26.281417 containerd[1467]: time="2025-02-13T19:37:26.281354518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 628.904575ms" Feb 13 19:37:26.293810 kubelet[1981]: E0213 19:37:26.293748 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:26.381574 kubelet[1981]: E0213 19:37:26.380889 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:26.383726 containerd[1467]: time="2025-02-13T19:37:26.383193740Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:26.384371 containerd[1467]: time="2025-02-13T19:37:26.383458972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:26.384371 containerd[1467]: time="2025-02-13T19:37:26.383608207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:26.385184 containerd[1467]: time="2025-02-13T19:37:26.385007282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:26.389120 containerd[1467]: time="2025-02-13T19:37:26.388364093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:26.389120 containerd[1467]: time="2025-02-13T19:37:26.388448290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:26.389120 containerd[1467]: time="2025-02-13T19:37:26.388465730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:26.389120 containerd[1467]: time="2025-02-13T19:37:26.388552447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:26.449528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703550754.mount: Deactivated successfully. Feb 13 19:37:26.470632 systemd[1]: Started cri-containerd-54fb09b5967b33a951e4c50c0ac0250c2e123e1a0da28769ab608ef608bdd7da.scope - libcontainer container 54fb09b5967b33a951e4c50c0ac0250c2e123e1a0da28769ab608ef608bdd7da. Feb 13 19:37:26.476677 systemd[1]: Started cri-containerd-31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270.scope - libcontainer container 31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270. Feb 13 19:37:26.514498 containerd[1467]: time="2025-02-13T19:37:26.514229337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8dkv,Uid:51400619-f322-4903-a250-ce3da315c74f,Namespace:kube-system,Attempt:0,} returns sandbox id \"54fb09b5967b33a951e4c50c0ac0250c2e123e1a0da28769ab608ef608bdd7da\"" Feb 13 19:37:26.517586 containerd[1467]: time="2025-02-13T19:37:26.517530470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:37:26.520644 containerd[1467]: time="2025-02-13T19:37:26.520579052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qq6h,Uid:ce38e5bc-9cd8-438c-82f3-bf3f8d8f77fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270\"" Feb 13 19:37:27.294962 kubelet[1981]: E0213 19:37:27.294849 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:27.564158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965380749.mount: Deactivated successfully. 
Feb 13 19:37:27.896944 containerd[1467]: time="2025-02-13T19:37:27.896800490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:27.897988 containerd[1467]: time="2025-02-13T19:37:27.897933575Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363408" Feb 13 19:37:27.898823 containerd[1467]: time="2025-02-13T19:37:27.898763109Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:27.902534 containerd[1467]: time="2025-02-13T19:37:27.902453594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:27.903706 containerd[1467]: time="2025-02-13T19:37:27.903083735Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.385494586s" Feb 13 19:37:27.903706 containerd[1467]: time="2025-02-13T19:37:27.903119174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:37:27.905548 containerd[1467]: time="2025-02-13T19:37:27.905333345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:37:27.906864 containerd[1467]: time="2025-02-13T19:37:27.906799420Z" level=info msg="CreateContainer within sandbox \"54fb09b5967b33a951e4c50c0ac0250c2e123e1a0da28769ab608ef608bdd7da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:37:27.933437 containerd[1467]: time="2025-02-13T19:37:27.931782845Z" level=info msg="CreateContainer within sandbox \"54fb09b5967b33a951e4c50c0ac0250c2e123e1a0da28769ab608ef608bdd7da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"329605388a7733fb9422571f7f8ee02be621776d19b121df927b49f77cf8a76d\"" Feb 13 19:37:27.935021 containerd[1467]: time="2025-02-13T19:37:27.934981746Z" level=info msg="StartContainer for \"329605388a7733fb9422571f7f8ee02be621776d19b121df927b49f77cf8a76d\"" Feb 13 19:37:27.968074 systemd[1]: run-containerd-runc-k8s.io-329605388a7733fb9422571f7f8ee02be621776d19b121df927b49f77cf8a76d-runc.RTKnPv.mount: Deactivated successfully. Feb 13 19:37:27.976653 systemd[1]: Started cri-containerd-329605388a7733fb9422571f7f8ee02be621776d19b121df927b49f77cf8a76d.scope - libcontainer container 329605388a7733fb9422571f7f8ee02be621776d19b121df927b49f77cf8a76d. 
Feb 13 19:37:28.017357 containerd[1467]: time="2025-02-13T19:37:28.017163460Z" level=info msg="StartContainer for \"329605388a7733fb9422571f7f8ee02be621776d19b121df927b49f77cf8a76d\" returns successfully" Feb 13 19:37:28.296337 kubelet[1981]: E0213 19:37:28.296278 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:28.382501 kubelet[1981]: E0213 19:37:28.380883 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:28.441864 kubelet[1981]: E0213 19:37:28.441822 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.442042 kubelet[1981]: W0213 19:37:28.442026 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.442230 kubelet[1981]: E0213 19:37:28.442186 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.443415 kubelet[1981]: E0213 19:37:28.443280 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.443415 kubelet[1981]: W0213 19:37:28.443306 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.443415 kubelet[1981]: E0213 19:37:28.443353 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.444086 kubelet[1981]: E0213 19:37:28.443907 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.444086 kubelet[1981]: W0213 19:37:28.443931 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.444086 kubelet[1981]: E0213 19:37:28.443954 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.444536 kubelet[1981]: E0213 19:37:28.444330 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.444536 kubelet[1981]: W0213 19:37:28.444342 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.444536 kubelet[1981]: E0213 19:37:28.444490 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:28.445073 kubelet[1981]: E0213 19:37:28.444969 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.445073 kubelet[1981]: W0213 19:37:28.444983 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.445073 kubelet[1981]: E0213 19:37:28.444994 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.445931 kubelet[1981]: E0213 19:37:28.445861 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.445931 kubelet[1981]: W0213 19:37:28.445876 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.445931 kubelet[1981]: E0213 19:37:28.445899 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.446622 kubelet[1981]: E0213 19:37:28.446461 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.446622 kubelet[1981]: W0213 19:37:28.446511 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.446622 kubelet[1981]: E0213 19:37:28.446526 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.446874 kubelet[1981]: E0213 19:37:28.446748 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.447001 kubelet[1981]: W0213 19:37:28.446757 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.447001 kubelet[1981]: E0213 19:37:28.446928 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.447325 kubelet[1981]: E0213 19:37:28.447311 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.447580 kubelet[1981]: W0213 19:37:28.447432 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.447580 kubelet[1981]: E0213 19:37:28.447449 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:28.448064 kubelet[1981]: E0213 19:37:28.447940 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.448064 kubelet[1981]: W0213 19:37:28.447954 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.448064 kubelet[1981]: E0213 19:37:28.447965 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.448662 kubelet[1981]: E0213 19:37:28.448349 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.448662 kubelet[1981]: W0213 19:37:28.448493 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.448662 kubelet[1981]: E0213 19:37:28.448513 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.449021 kubelet[1981]: E0213 19:37:28.449006 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.449092 kubelet[1981]: W0213 19:37:28.449077 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.449332 kubelet[1981]: E0213 19:37:28.449193 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.449840 kubelet[1981]: E0213 19:37:28.449824 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.450030 kubelet[1981]: W0213 19:37:28.449933 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.450030 kubelet[1981]: E0213 19:37:28.449950 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.450257 kubelet[1981]: E0213 19:37:28.450219 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.450506 kubelet[1981]: W0213 19:37:28.450319 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.450506 kubelet[1981]: E0213 19:37:28.450336 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:28.450748 kubelet[1981]: E0213 19:37:28.450732 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.450834 kubelet[1981]: W0213 19:37:28.450823 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.450932 kubelet[1981]: E0213 19:37:28.450878 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.451533 kubelet[1981]: E0213 19:37:28.451504 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.451861 kubelet[1981]: W0213 19:37:28.451629 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.451861 kubelet[1981]: E0213 19:37:28.451644 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.452179 kubelet[1981]: E0213 19:37:28.452141 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.452334 kubelet[1981]: W0213 19:37:28.452277 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.452513 kubelet[1981]: E0213 19:37:28.452495 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.452900 kubelet[1981]: E0213 19:37:28.452845 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.453170 kubelet[1981]: W0213 19:37:28.453003 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.453170 kubelet[1981]: E0213 19:37:28.453021 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.453540 kubelet[1981]: E0213 19:37:28.453414 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.453540 kubelet[1981]: W0213 19:37:28.453428 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.453540 kubelet[1981]: E0213 19:37:28.453438 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:28.454067 kubelet[1981]: E0213 19:37:28.454042 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.454149 kubelet[1981]: W0213 19:37:28.454137 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.454629 kubelet[1981]: E0213 19:37:28.454222 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.454886 kubelet[1981]: E0213 19:37:28.454871 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.454967 kubelet[1981]: W0213 19:37:28.454955 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.455023 kubelet[1981]: E0213 19:37:28.455013 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.455331 kubelet[1981]: E0213 19:37:28.455317 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.455869 kubelet[1981]: W0213 19:37:28.455718 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.455869 kubelet[1981]: E0213 19:37:28.455755 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.456486 kubelet[1981]: E0213 19:37:28.456236 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.456486 kubelet[1981]: W0213 19:37:28.456250 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.456486 kubelet[1981]: E0213 19:37:28.456427 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.458581 kubelet[1981]: E0213 19:37:28.458092 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.458581 kubelet[1981]: W0213 19:37:28.458218 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.458581 kubelet[1981]: E0213 19:37:28.458265 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:28.460506 kubelet[1981]: E0213 19:37:28.460142 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.460506 kubelet[1981]: W0213 19:37:28.460157 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.460506 kubelet[1981]: E0213 19:37:28.460428 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.461229 kubelet[1981]: E0213 19:37:28.460808 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.461229 kubelet[1981]: W0213 19:37:28.460825 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.461229 kubelet[1981]: E0213 19:37:28.460868 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.462136 kubelet[1981]: E0213 19:37:28.461921 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.462136 kubelet[1981]: W0213 19:37:28.461936 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.462634 kubelet[1981]: E0213 19:37:28.462321 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.463595 kubelet[1981]: E0213 19:37:28.463325 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.463595 kubelet[1981]: W0213 19:37:28.463352 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.463595 kubelet[1981]: E0213 19:37:28.463466 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.464129 kubelet[1981]: E0213 19:37:28.464096 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.464129 kubelet[1981]: W0213 19:37:28.464123 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.464502 kubelet[1981]: E0213 19:37:28.464222 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:28.465227 kubelet[1981]: E0213 19:37:28.465192 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.465227 kubelet[1981]: W0213 19:37:28.465220 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.465319 kubelet[1981]: E0213 19:37:28.465247 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.466771 kubelet[1981]: E0213 19:37:28.466686 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.466771 kubelet[1981]: W0213 19:37:28.466756 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.466950 kubelet[1981]: E0213 19:37:28.466913 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:28.467514 kubelet[1981]: E0213 19:37:28.467476 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:28.467660 kubelet[1981]: W0213 19:37:28.467610 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:28.467660 kubelet[1981]: E0213 19:37:28.467640 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.296588 kubelet[1981]: E0213 19:37:29.296509 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:29.461266 kubelet[1981]: E0213 19:37:29.461173 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.461266 kubelet[1981]: W0213 19:37:29.461206 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.461266 kubelet[1981]: E0213 19:37:29.461233 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:29.461895 kubelet[1981]: E0213 19:37:29.461527 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.461895 kubelet[1981]: W0213 19:37:29.461540 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.461895 kubelet[1981]: E0213 19:37:29.461594 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.461895 kubelet[1981]: E0213 19:37:29.461796 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.461895 kubelet[1981]: W0213 19:37:29.461807 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.461895 kubelet[1981]: E0213 19:37:29.461819 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.462229 kubelet[1981]: E0213 19:37:29.461995 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.462229 kubelet[1981]: W0213 19:37:29.462054 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.462229 kubelet[1981]: E0213 19:37:29.462067 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.462545 kubelet[1981]: E0213 19:37:29.462458 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.462545 kubelet[1981]: W0213 19:37:29.462474 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.462545 kubelet[1981]: E0213 19:37:29.462512 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.462909 kubelet[1981]: E0213 19:37:29.462887 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.462909 kubelet[1981]: W0213 19:37:29.462905 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.463130 kubelet[1981]: E0213 19:37:29.462919 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:29.463130 kubelet[1981]: E0213 19:37:29.463112 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.463130 kubelet[1981]: W0213 19:37:29.463123 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.463463 kubelet[1981]: E0213 19:37:29.463134 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.463598 kubelet[1981]: E0213 19:37:29.463535 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.463598 kubelet[1981]: W0213 19:37:29.463549 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.463598 kubelet[1981]: E0213 19:37:29.463563 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.463892 kubelet[1981]: E0213 19:37:29.463788 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.463892 kubelet[1981]: W0213 19:37:29.463799 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.463892 kubelet[1981]: E0213 19:37:29.463811 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.464204 kubelet[1981]: E0213 19:37:29.463982 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.464204 kubelet[1981]: W0213 19:37:29.463993 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.464204 kubelet[1981]: E0213 19:37:29.464003 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.464616 kubelet[1981]: E0213 19:37:29.464286 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.464616 kubelet[1981]: W0213 19:37:29.464321 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.464616 kubelet[1981]: E0213 19:37:29.464452 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:29.464880 kubelet[1981]: E0213 19:37:29.464791 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.464880 kubelet[1981]: W0213 19:37:29.464806 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.464880 kubelet[1981]: E0213 19:37:29.464820 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.465487 kubelet[1981]: E0213 19:37:29.465038 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.465487 kubelet[1981]: W0213 19:37:29.465049 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.465487 kubelet[1981]: E0213 19:37:29.465062 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.465487 kubelet[1981]: E0213 19:37:29.465238 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.465487 kubelet[1981]: W0213 19:37:29.465248 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.465487 kubelet[1981]: E0213 19:37:29.465259 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.465487 kubelet[1981]: E0213 19:37:29.465475 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.465487 kubelet[1981]: W0213 19:37:29.465486 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.465487 kubelet[1981]: E0213 19:37:29.465498 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.466455 kubelet[1981]: E0213 19:37:29.465691 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.466455 kubelet[1981]: W0213 19:37:29.465703 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.466455 kubelet[1981]: E0213 19:37:29.465715 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:29.466455 kubelet[1981]: E0213 19:37:29.465896 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.466455 kubelet[1981]: W0213 19:37:29.465906 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.466455 kubelet[1981]: E0213 19:37:29.465917 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.466455 kubelet[1981]: E0213 19:37:29.466091 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.466455 kubelet[1981]: W0213 19:37:29.466100 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.466455 kubelet[1981]: E0213 19:37:29.466111 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.466455 kubelet[1981]: E0213 19:37:29.466304 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.467144 kubelet[1981]: W0213 19:37:29.466315 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.467144 kubelet[1981]: E0213 19:37:29.466325 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.467144 kubelet[1981]: E0213 19:37:29.466680 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.467144 kubelet[1981]: W0213 19:37:29.466693 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.467144 kubelet[1981]: E0213 19:37:29.466707 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.523583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766122069.mount: Deactivated successfully. Feb 13 19:37:29.562292 kubelet[1981]: E0213 19:37:29.561982 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.562292 kubelet[1981]: W0213 19:37:29.562015 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.562292 kubelet[1981]: E0213 19:37:29.562046 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:29.563641 kubelet[1981]: E0213 19:37:29.563442 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.564307 kubelet[1981]: W0213 19:37:29.563943 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.564307 kubelet[1981]: E0213 19:37:29.564028 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.565475 kubelet[1981]: E0213 19:37:29.565341 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.565475 kubelet[1981]: W0213 19:37:29.565360 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.565618 kubelet[1981]: E0213 19:37:29.565552 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.565667 kubelet[1981]: E0213 19:37:29.565634 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.565667 kubelet[1981]: W0213 19:37:29.565642 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.565667 kubelet[1981]: E0213 19:37:29.565658 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.565890 kubelet[1981]: E0213 19:37:29.565838 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.565890 kubelet[1981]: W0213 19:37:29.565862 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.565890 kubelet[1981]: E0213 19:37:29.565871 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.566048 kubelet[1981]: E0213 19:37:29.566035 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.566048 kubelet[1981]: W0213 19:37:29.566046 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.566133 kubelet[1981]: E0213 19:37:29.566063 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:29.566290 kubelet[1981]: E0213 19:37:29.566275 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.566290 kubelet[1981]: W0213 19:37:29.566288 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.566363 kubelet[1981]: E0213 19:37:29.566306 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.566884 kubelet[1981]: E0213 19:37:29.566851 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.566884 kubelet[1981]: W0213 19:37:29.566875 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.566963 kubelet[1981]: E0213 19:37:29.566891 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.567064 kubelet[1981]: E0213 19:37:29.567048 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.567064 kubelet[1981]: W0213 19:37:29.567061 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.567127 kubelet[1981]: E0213 19:37:29.567073 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.567256 kubelet[1981]: E0213 19:37:29.567243 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.567299 kubelet[1981]: W0213 19:37:29.567256 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.567299 kubelet[1981]: E0213 19:37:29.567268 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.567702 kubelet[1981]: E0213 19:37:29.567586 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.567702 kubelet[1981]: W0213 19:37:29.567600 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.567702 kubelet[1981]: E0213 19:37:29.567617 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:37:29.567846 kubelet[1981]: E0213 19:37:29.567835 1981 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:37:29.567905 kubelet[1981]: W0213 19:37:29.567892 1981 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:37:29.567959 kubelet[1981]: E0213 19:37:29.567950 1981 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:37:29.988257 containerd[1467]: time="2025-02-13T19:37:29.987476517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:29.988806 containerd[1467]: time="2025-02-13T19:37:29.988755721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 19:37:29.990232 containerd[1467]: time="2025-02-13T19:37:29.990185960Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:29.993028 containerd[1467]: time="2025-02-13T19:37:29.992957041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:29.993999 containerd[1467]: time="2025-02-13T19:37:29.993831137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 2.088460153s" Feb 13 19:37:29.993999 containerd[1467]: time="2025-02-13T19:37:29.993870816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:37:29.996867 containerd[1467]: time="2025-02-13T19:37:29.996620338Z" level=info msg="CreateContainer within sandbox \"31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:37:30.025462 containerd[1467]: time="2025-02-13T19:37:30.025358993Z" level=info msg="CreateContainer within sandbox \"31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220\"" Feb 13 19:37:30.025982 containerd[1467]: time="2025-02-13T19:37:30.025938897Z" level=info msg="StartContainer for \"c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220\"" Feb 13 19:37:30.058664 systemd[1]: Started cri-containerd-c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220.scope - libcontainer container c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220. 
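The long run of driver-call.go / plugins.go errors above is the kubelet's FlexVolume prober executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and failing to parse a JSON reply, because that driver binary has not been installed yet; installing it is exactly the job of the flexvol-driver container created above from the Calico pod2daemon-flexvol image, and once it has run these probe errors no longer appear in this log. A FlexVolume driver answers "init" by printing a small JSON status object to stdout; the sketch below is a minimal hedged example of such a driver, not Calico's actual uds binary:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus follows the FlexVolume call convention: every invocation
    // prints a JSON object with at least a "status" field to stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        op := ""
        if len(os.Args) > 1 {
            op = os.Args[1]
        }
        if op == "init" {
            // Advertise that this driver does not implement attach/detach, so
            // the kubelet only issues mount/unmount-style calls to it.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Declining an operation is done with the "Not supported" status.
        out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: op})
        fmt.Println(string(out))
        os.Exit(1)
    }

Had this binary been present, the "init" call would have produced parseable JSON and the "unexpected end of JSON input" unmarshal failures above would not occur.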
Feb 13 19:37:30.102662 containerd[1467]: time="2025-02-13T19:37:30.102605581Z" level=info msg="StartContainer for \"c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220\" returns successfully" Feb 13 19:37:30.118827 systemd[1]: cri-containerd-c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220.scope: Deactivated successfully. Feb 13 19:37:30.186041 containerd[1467]: time="2025-02-13T19:37:30.185906924Z" level=info msg="shim disconnected" id=c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220 namespace=k8s.io Feb 13 19:37:30.186574 containerd[1467]: time="2025-02-13T19:37:30.186345072Z" level=warning msg="cleaning up after shim disconnected" id=c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220 namespace=k8s.io Feb 13 19:37:30.186574 containerd[1467]: time="2025-02-13T19:37:30.186367712Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:37:30.297607 kubelet[1981]: E0213 19:37:30.296779 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:30.384804 kubelet[1981]: E0213 19:37:30.384749 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:30.412032 containerd[1467]: time="2025-02-13T19:37:30.411708808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:37:30.431831 kubelet[1981]: I0213 19:37:30.430451 1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m8dkv" podStartSLOduration=5.042763823 podStartE2EDuration="6.430433061s" podCreationTimestamp="2025-02-13 19:37:24 +0000 UTC" firstStartedPulling="2025-02-13 19:37:26.516993808 +0000 UTC m=+2.971911315" lastFinishedPulling="2025-02-13 19:37:27.904663046 +0000 UTC m=+4.359580553" observedRunningTime="2025-02-13 19:37:28.416720213 +0000 UTC m=+4.871637760" watchObservedRunningTime="2025-02-13 19:37:30.430433061 +0000 UTC m=+6.885350648" Feb 13 19:37:30.495035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8c1b298497034e92a2c21c740b0ccd4732dc9e2b9f9766692b48accb04ea220-rootfs.mount: Deactivated successfully. 
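The pod_startup_latency_tracker entry above for kube-proxy-m8dkv is internally consistent: podStartE2EDuration is measured from podCreationTimestamp (19:37:24) to observedRunningTime (19:37:30.430433061), i.e. 6.430433061 s, and podStartSLOduration appears to be that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling):

      6.430433061 s - (27.904663046 s - 26.516993808 s)
    = 6.430433061 s - 1.387669238 s
    = 5.042763823 s

which matches the logged podStartSLOduration=5.042763823.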
Feb 13 19:37:31.298085 kubelet[1981]: E0213 19:37:31.297163 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:32.298342 kubelet[1981]: E0213 19:37:32.298271 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:32.382202 kubelet[1981]: E0213 19:37:32.381085 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:33.298633 kubelet[1981]: E0213 19:37:33.298570 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:33.870004 update_engine[1455]: I20250213 19:37:33.869935 1455 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 19:37:33.870004 update_engine[1455]: I20250213 19:37:33.869990 1455 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 19:37:33.870806 update_engine[1455]: I20250213 19:37:33.870534 1455 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 19:37:33.871478 update_engine[1455]: I20250213 19:37:33.871428 1455 omaha_request_params.cc:62] Current group set to beta Feb 13 19:37:33.871695 update_engine[1455]: I20250213 19:37:33.871569 1455 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 19:37:33.871695 update_engine[1455]: I20250213 19:37:33.871587 1455 update_attempter.cc:643] Scheduling an action processor start. Feb 13 19:37:33.871695 update_engine[1455]: I20250213 19:37:33.871608 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:37:33.871695 update_engine[1455]: I20250213 19:37:33.871642 1455 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 19:37:33.872049 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 19:37:33.872271 update_engine[1455]: I20250213 19:37:33.872072 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:37:33.872271 update_engine[1455]: I20250213 19:37:33.872092 1455 omaha_request_action.cc:272] Request: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: Feb 13 19:37:33.872271 update_engine[1455]: I20250213 19:37:33.872099 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:37:33.875754 update_engine[1455]: I20250213 19:37:33.874785 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:37:33.876507 update_engine[1455]: I20250213 19:37:33.876446 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
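The update_engine entries above show the Omaha update check being posted to the literal host "disabled" ("Posting an Omaha request to disabled"), which fails to resolve in the entry that follows. On Flatcar the release group and server URL come from the update configuration file (GROUP=beta matches "Current group set to beta"), and pointing the server at "disabled" is a common way of switching off automatic updates, so the DNS failure below is expected rather than a fault. A rough sketch of reading such a KEY=VALUE configuration; the file path and the GROUP/SERVER keys are assumptions for illustration, not values taken from this host:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Assumed override file; Flatcar also ships baked-in defaults elsewhere.
        f, err := os.Open("/etc/flatcar/update.conf")
        if err != nil {
            fmt.Println("no override file:", err)
            return
        }
        defer f.Close()

        conf := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            if k, v, ok := strings.Cut(line, "="); ok {
                conf[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        // With SERVER=disabled the Omaha POST targets an unresolvable host,
        // producing the "Could not resolve host: disabled" entry seen below.
        fmt.Printf("group=%q server=%q\n", conf["GROUP"], conf["SERVER"])
    }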
Feb 13 19:37:33.876608 update_engine[1455]: E20250213 19:37:33.876585 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:37:33.876662 update_engine[1455]: I20250213 19:37:33.876643 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 19:37:34.299349 kubelet[1981]: E0213 19:37:34.299313 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:34.382614 kubelet[1981]: E0213 19:37:34.382313 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:35.226406 containerd[1467]: time="2025-02-13T19:37:35.226243431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:35.228693 containerd[1467]: time="2025-02-13T19:37:35.228633500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:37:35.230224 containerd[1467]: time="2025-02-13T19:37:35.230145747Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:35.233765 containerd[1467]: time="2025-02-13T19:37:35.233702111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:35.235762 containerd[1467]: time="2025-02-13T19:37:35.235611591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.823859104s" Feb 13 19:37:35.235762 containerd[1467]: time="2025-02-13T19:37:35.235653830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:37:35.238566 containerd[1467]: time="2025-02-13T19:37:35.238354172Z" level=info msg="CreateContainer within sandbox \"31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:37:35.259315 containerd[1467]: time="2025-02-13T19:37:35.259273525Z" level=info msg="CreateContainer within sandbox \"31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157\"" Feb 13 19:37:35.260506 containerd[1467]: time="2025-02-13T19:37:35.260346583Z" level=info msg="StartContainer for \"cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157\"" Feb 13 19:37:35.294704 systemd[1]: Started cri-containerd-cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157.scope - libcontainer container cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157. 
Feb 13 19:37:35.300459 kubelet[1981]: E0213 19:37:35.299989 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:35.334888 containerd[1467]: time="2025-02-13T19:37:35.334795153Z" level=info msg="StartContainer for \"cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157\" returns successfully" Feb 13 19:37:35.845569 containerd[1467]: time="2025-02-13T19:37:35.845454531Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:37:35.849528 systemd[1]: cri-containerd-cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157.scope: Deactivated successfully. Feb 13 19:37:35.874998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157-rootfs.mount: Deactivated successfully. Feb 13 19:37:35.922954 kubelet[1981]: I0213 19:37:35.922907 1981 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:37:36.009994 containerd[1467]: time="2025-02-13T19:37:36.009867266Z" level=info msg="shim disconnected" id=cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157 namespace=k8s.io Feb 13 19:37:36.010253 containerd[1467]: time="2025-02-13T19:37:36.009981544Z" level=warning msg="cleaning up after shim disconnected" id=cdd13aa85d94510191961fe638d6fee60e79513712d92a3d1b78525b34154157 namespace=k8s.io Feb 13 19:37:36.010253 containerd[1467]: time="2025-02-13T19:37:36.010060582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:37:36.023910 containerd[1467]: time="2025-02-13T19:37:36.023720025Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:37:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:37:36.300964 kubelet[1981]: E0213 19:37:36.300911 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:36.391828 systemd[1]: Created slice kubepods-besteffort-podf9f2320c_ee95_40d0_96bc_00a1af393157.slice - libcontainer container kubepods-besteffort-podf9f2320c_ee95_40d0_96bc_00a1af393157.slice. 
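The "failed to reload cni configuration ... no network config found in /etc/cni/net.d: cni plugin not initialized" error above, like the repeated "NetworkReady=false ... cni plugin not initialized" pod sync errors earlier, persists until Calico's install-cni container (started just before and torn down once it exits) has finished writing a *.conflist plus the calico-kubeconfig file into /etc/cni/net.d; the fs change event for /etc/cni/net.d/calico-kubeconfig and the "Fast updating node status as it just became ready" entry show that happening here. A small hedged sketch of the presence check containerd is effectively making, not its actual implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // containerd's CRI plugin loads network configs from this directory by
        // default; the check below only mirrors the observable behaviour.
        const netDir = "/etc/cni/net.d"

        var configs []string
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, _ := filepath.Glob(filepath.Join(netDir, pattern))
            configs = append(configs, matches...)
        }
        if len(configs) == 0 {
            fmt.Printf("no network config found in %s: cni plugin not initialized\n", netDir)
            os.Exit(1)
        }
        fmt.Println("CNI configs:", configs)
    }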
Feb 13 19:37:36.395207 containerd[1467]: time="2025-02-13T19:37:36.395102123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:0,}" Feb 13 19:37:36.440449 containerd[1467]: time="2025-02-13T19:37:36.440291486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:37:36.491919 containerd[1467]: time="2025-02-13T19:37:36.491845759Z" level=error msg="Failed to destroy network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:36.493916 containerd[1467]: time="2025-02-13T19:37:36.492308789Z" level=error msg="encountered an error cleaning up failed sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:36.493916 containerd[1467]: time="2025-02-13T19:37:36.492384868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:36.493792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b-shm.mount: Deactivated successfully. 
Feb 13 19:37:36.494578 kubelet[1981]: E0213 19:37:36.493973 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:36.494578 kubelet[1981]: E0213 19:37:36.494033 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:36.494578 kubelet[1981]: E0213 19:37:36.494052 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:36.494757 kubelet[1981]: E0213 19:37:36.494111 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:37.301877 kubelet[1981]: E0213 19:37:37.301798 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:37.443497 kubelet[1981]: I0213 19:37:37.443006 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b" Feb 13 19:37:37.444251 containerd[1467]: time="2025-02-13T19:37:37.443893591Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:37.444251 containerd[1467]: time="2025-02-13T19:37:37.444089187Z" level=info msg="Ensure that sandbox 6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b in task-service has been cleanup successfully" Feb 13 19:37:37.444800 containerd[1467]: time="2025-02-13T19:37:37.444670496Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:37.444800 containerd[1467]: time="2025-02-13T19:37:37.444697815Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:37.446230 systemd[1]: run-netns-cni\x2df6403bb9\x2dc148\x2d4472\x2db1dc\x2db69c65b1311d.mount: Deactivated 
successfully. Feb 13 19:37:37.446586 containerd[1467]: time="2025-02-13T19:37:37.446520420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:1,}" Feb 13 19:37:37.513291 containerd[1467]: time="2025-02-13T19:37:37.513239093Z" level=error msg="Failed to destroy network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:37.514435 containerd[1467]: time="2025-02-13T19:37:37.513831241Z" level=error msg="encountered an error cleaning up failed sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:37.514435 containerd[1467]: time="2025-02-13T19:37:37.513907880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:37.514585 kubelet[1981]: E0213 19:37:37.514207 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:37.514585 kubelet[1981]: E0213 19:37:37.514277 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:37.514585 kubelet[1981]: E0213 19:37:37.514311 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:37.514707 kubelet[1981]: E0213 19:37:37.514355 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:37.515555 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4-shm.mount: Deactivated successfully. Feb 13 19:37:38.302299 kubelet[1981]: E0213 19:37:38.302213 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:38.449941 kubelet[1981]: I0213 19:37:38.446808 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4" Feb 13 19:37:38.449927 systemd[1]: run-netns-cni\x2d3de5d278\x2d398d\x2ddac2\x2d9ab0\x2d603fe9869c31.mount: Deactivated successfully. Feb 13 19:37:38.450484 containerd[1467]: time="2025-02-13T19:37:38.447681175Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:37:38.450484 containerd[1467]: time="2025-02-13T19:37:38.447853652Z" level=info msg="Ensure that sandbox 21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4 in task-service has been cleanup successfully" Feb 13 19:37:38.450484 containerd[1467]: time="2025-02-13T19:37:38.448142887Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:37:38.450484 containerd[1467]: time="2025-02-13T19:37:38.448161766Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:37:38.450946 containerd[1467]: time="2025-02-13T19:37:38.450869277Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:38.451036 containerd[1467]: time="2025-02-13T19:37:38.451018794Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:38.451076 containerd[1467]: time="2025-02-13T19:37:38.451037314Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:38.452056 containerd[1467]: time="2025-02-13T19:37:38.451717141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:2,}" Feb 13 19:37:38.525858 containerd[1467]: time="2025-02-13T19:37:38.525750745Z" level=error msg="Failed to destroy network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:38.527602 containerd[1467]: time="2025-02-13T19:37:38.526261096Z" level=error msg="encountered an error cleaning up failed sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Feb 13 19:37:38.527602 containerd[1467]: time="2025-02-13T19:37:38.526352654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:38.528538 kubelet[1981]: E0213 19:37:38.528044 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:38.528538 kubelet[1981]: E0213 19:37:38.528148 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:38.528538 kubelet[1981]: E0213 19:37:38.528173 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:38.528768 kubelet[1981]: E0213 19:37:38.528217 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:38.528990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d-shm.mount: Deactivated successfully. 
Feb 13 19:37:39.303426 kubelet[1981]: E0213 19:37:39.303342 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:39.452560 kubelet[1981]: I0213 19:37:39.452299 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d" Feb 13 19:37:39.453480 containerd[1467]: time="2025-02-13T19:37:39.453088466Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:37:39.453480 containerd[1467]: time="2025-02-13T19:37:39.453309062Z" level=info msg="Ensure that sandbox facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d in task-service has been cleanup successfully" Feb 13 19:37:39.456003 containerd[1467]: time="2025-02-13T19:37:39.454144088Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:37:39.456003 containerd[1467]: time="2025-02-13T19:37:39.454170767Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:37:39.455766 systemd[1]: run-netns-cni\x2d8d1d33b7\x2d6e95\x2d1f23\x2dd78a\x2dcb846a097434.mount: Deactivated successfully. Feb 13 19:37:39.456650 containerd[1467]: time="2025-02-13T19:37:39.456594205Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:37:39.456738 containerd[1467]: time="2025-02-13T19:37:39.456718283Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:37:39.456738 containerd[1467]: time="2025-02-13T19:37:39.456732803Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:37:39.457457 containerd[1467]: time="2025-02-13T19:37:39.457278473Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:39.457457 containerd[1467]: time="2025-02-13T19:37:39.457373551Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:39.457457 containerd[1467]: time="2025-02-13T19:37:39.457383631Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:39.458824 containerd[1467]: time="2025-02-13T19:37:39.457973101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:3,}" Feb 13 19:37:39.545977 containerd[1467]: time="2025-02-13T19:37:39.545913414Z" level=error msg="Failed to destroy network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:39.546437 containerd[1467]: time="2025-02-13T19:37:39.546403365Z" level=error msg="encountered an error cleaning up failed sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:39.546510 containerd[1467]: time="2025-02-13T19:37:39.546476724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:39.548537 kubelet[1981]: E0213 19:37:39.546712 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:39.548537 kubelet[1981]: E0213 19:37:39.546772 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:39.548537 kubelet[1981]: E0213 19:37:39.546814 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:39.548674 kubelet[1981]: E0213 19:37:39.546854 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:39.548776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2-shm.mount: Deactivated successfully. 
Feb 13 19:37:40.304162 kubelet[1981]: E0213 19:37:40.304072 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:40.457162 kubelet[1981]: I0213 19:37:40.457040 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2" Feb 13 19:37:40.459136 containerd[1467]: time="2025-02-13T19:37:40.458090787Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:37:40.459136 containerd[1467]: time="2025-02-13T19:37:40.458276224Z" level=info msg="Ensure that sandbox 56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2 in task-service has been cleanup successfully" Feb 13 19:37:40.460842 systemd[1]: run-netns-cni\x2de2672a4c\x2d926d\x2d3698\x2d53b1\x2d0f6061712e69.mount: Deactivated successfully. Feb 13 19:37:40.461977 containerd[1467]: time="2025-02-13T19:37:40.460941780Z" level=info msg="TearDown network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" successfully" Feb 13 19:37:40.461977 containerd[1467]: time="2025-02-13T19:37:40.461659608Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" returns successfully" Feb 13 19:37:40.462816 containerd[1467]: time="2025-02-13T19:37:40.462162720Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:37:40.462816 containerd[1467]: time="2025-02-13T19:37:40.462271718Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:37:40.462816 containerd[1467]: time="2025-02-13T19:37:40.462283518Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:37:40.463350 containerd[1467]: time="2025-02-13T19:37:40.462990986Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:37:40.463645 containerd[1467]: time="2025-02-13T19:37:40.463621336Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:37:40.463687 containerd[1467]: time="2025-02-13T19:37:40.463646695Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:37:40.464136 containerd[1467]: time="2025-02-13T19:37:40.464107728Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:40.464230 containerd[1467]: time="2025-02-13T19:37:40.464211366Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:40.464261 containerd[1467]: time="2025-02-13T19:37:40.464229846Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:40.464947 containerd[1467]: time="2025-02-13T19:37:40.464920754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:4,}" Feb 13 19:37:40.538332 containerd[1467]: time="2025-02-13T19:37:40.538254028Z" level=error msg="Failed to destroy network for sandbox 
\"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:40.540025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f-shm.mount: Deactivated successfully. Feb 13 19:37:40.540570 containerd[1467]: time="2025-02-13T19:37:40.539556486Z" level=error msg="encountered an error cleaning up failed sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:40.540652 containerd[1467]: time="2025-02-13T19:37:40.540617909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:40.541413 kubelet[1981]: E0213 19:37:40.540874 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:40.541413 kubelet[1981]: E0213 19:37:40.540932 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:40.541413 kubelet[1981]: E0213 19:37:40.540951 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:40.541564 kubelet[1981]: E0213 19:37:40.540990 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:41.305366 kubelet[1981]: E0213 19:37:41.305149 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:41.471200 kubelet[1981]: I0213 19:37:41.471025 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f" Feb 13 19:37:41.472237 containerd[1467]: time="2025-02-13T19:37:41.471998164Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" Feb 13 19:37:41.472600 containerd[1467]: time="2025-02-13T19:37:41.472453877Z" level=info msg="Ensure that sandbox b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f in task-service has been cleanup successfully" Feb 13 19:37:41.475374 containerd[1467]: time="2025-02-13T19:37:41.472720473Z" level=info msg="TearDown network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" successfully" Feb 13 19:37:41.475374 containerd[1467]: time="2025-02-13T19:37:41.474210970Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" returns successfully" Feb 13 19:37:41.475374 containerd[1467]: time="2025-02-13T19:37:41.474854360Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:37:41.475374 containerd[1467]: time="2025-02-13T19:37:41.474977638Z" level=info msg="TearDown network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" successfully" Feb 13 19:37:41.475374 containerd[1467]: time="2025-02-13T19:37:41.474990237Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" returns successfully" Feb 13 19:37:41.474784 systemd[1]: run-netns-cni\x2da2b4063c\x2d86ab\x2d06f7\x2d90d2\x2d212768e933de.mount: Deactivated successfully. 
Feb 13 19:37:41.476542 containerd[1467]: time="2025-02-13T19:37:41.476321417Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:37:41.476542 containerd[1467]: time="2025-02-13T19:37:41.476451175Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:37:41.476542 containerd[1467]: time="2025-02-13T19:37:41.476462775Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:37:41.477007 containerd[1467]: time="2025-02-13T19:37:41.476979007Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:37:41.477483 containerd[1467]: time="2025-02-13T19:37:41.477181163Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:37:41.477483 containerd[1467]: time="2025-02-13T19:37:41.477197123Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:37:41.478171 containerd[1467]: time="2025-02-13T19:37:41.477954831Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:41.478171 containerd[1467]: time="2025-02-13T19:37:41.478102509Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:41.478171 containerd[1467]: time="2025-02-13T19:37:41.478113189Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:41.478589 containerd[1467]: time="2025-02-13T19:37:41.478565262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:5,}" Feb 13 19:37:41.558591 containerd[1467]: time="2025-02-13T19:37:41.558450939Z" level=error msg="Failed to destroy network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:41.559357 containerd[1467]: time="2025-02-13T19:37:41.559181047Z" level=error msg="encountered an error cleaning up failed sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:41.559357 containerd[1467]: time="2025-02-13T19:37:41.559262326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:41.561229 kubelet[1981]: E0213 19:37:41.560524 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:41.561229 kubelet[1981]: E0213 19:37:41.560581 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:41.561229 kubelet[1981]: E0213 19:37:41.560604 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:41.560912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188-shm.mount: Deactivated successfully. Feb 13 19:37:41.561512 kubelet[1981]: E0213 19:37:41.560641 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:42.305571 kubelet[1981]: E0213 19:37:42.305441 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:42.397168 systemd[1]: Created slice kubepods-besteffort-pod22a00d22_2381_4428_8e2c_0bd83180a5f0.slice - libcontainer container kubepods-besteffort-pod22a00d22_2381_4428_8e2c_0bd83180a5f0.slice. 
Feb 13 19:37:42.478416 kubelet[1981]: I0213 19:37:42.478210 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188" Feb 13 19:37:42.479709 containerd[1467]: time="2025-02-13T19:37:42.479358378Z" level=info msg="StopPodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\"" Feb 13 19:37:42.479709 containerd[1467]: time="2025-02-13T19:37:42.479612495Z" level=info msg="Ensure that sandbox 3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188 in task-service has been cleanup successfully" Feb 13 19:37:42.480090 containerd[1467]: time="2025-02-13T19:37:42.479914490Z" level=info msg="TearDown network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" successfully" Feb 13 19:37:42.480090 containerd[1467]: time="2025-02-13T19:37:42.479929850Z" level=info msg="StopPodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" returns successfully" Feb 13 19:37:42.481632 systemd[1]: run-netns-cni\x2d3d5c4618\x2d0764\x2d2115\x2df6cd\x2d8067d395b7f6.mount: Deactivated successfully. Feb 13 19:37:42.483878 containerd[1467]: time="2025-02-13T19:37:42.483845512Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" Feb 13 19:37:42.483972 containerd[1467]: time="2025-02-13T19:37:42.483954111Z" level=info msg="TearDown network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" successfully" Feb 13 19:37:42.484006 containerd[1467]: time="2025-02-13T19:37:42.483970551Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" returns successfully" Feb 13 19:37:42.484576 containerd[1467]: time="2025-02-13T19:37:42.484418184Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:37:42.484576 containerd[1467]: time="2025-02-13T19:37:42.484505543Z" level=info msg="TearDown network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" successfully" Feb 13 19:37:42.484576 containerd[1467]: time="2025-02-13T19:37:42.484515943Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" returns successfully" Feb 13 19:37:42.485220 containerd[1467]: time="2025-02-13T19:37:42.485193813Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:37:42.485869 containerd[1467]: time="2025-02-13T19:37:42.485786564Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:37:42.485869 containerd[1467]: time="2025-02-13T19:37:42.485807164Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:37:42.486592 containerd[1467]: time="2025-02-13T19:37:42.486440394Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:37:42.486592 containerd[1467]: time="2025-02-13T19:37:42.486519593Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:37:42.486592 containerd[1467]: time="2025-02-13T19:37:42.486529353Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns 
successfully" Feb 13 19:37:42.487196 containerd[1467]: time="2025-02-13T19:37:42.487156744Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:42.487358 containerd[1467]: time="2025-02-13T19:37:42.487253702Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:42.487358 containerd[1467]: time="2025-02-13T19:37:42.487268542Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:42.488947 containerd[1467]: time="2025-02-13T19:37:42.488914598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:6,}" Feb 13 19:37:42.549161 kubelet[1981]: I0213 19:37:42.548663 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55vz\" (UniqueName: \"kubernetes.io/projected/22a00d22-2381-4428-8e2c-0bd83180a5f0-kube-api-access-b55vz\") pod \"nginx-deployment-7fcdb87857-znm5m\" (UID: \"22a00d22-2381-4428-8e2c-0bd83180a5f0\") " pod="default/nginx-deployment-7fcdb87857-znm5m" Feb 13 19:37:42.583207 containerd[1467]: time="2025-02-13T19:37:42.582886576Z" level=error msg="Failed to destroy network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:42.586143 containerd[1467]: time="2025-02-13T19:37:42.584687790Z" level=error msg="encountered an error cleaning up failed sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:42.586143 containerd[1467]: time="2025-02-13T19:37:42.584777589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:42.586244 kubelet[1981]: E0213 19:37:42.585547 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:42.586244 kubelet[1981]: E0213 19:37:42.585601 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:42.586244 kubelet[1981]: E0213 19:37:42.585637 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:42.586381 kubelet[1981]: E0213 19:37:42.585674 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:42.587008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce-shm.mount: Deactivated successfully. Feb 13 19:37:42.703028 containerd[1467]: time="2025-02-13T19:37:42.702971931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-znm5m,Uid:22a00d22-2381-4428-8e2c-0bd83180a5f0,Namespace:default,Attempt:0,}" Feb 13 19:37:42.795520 containerd[1467]: time="2025-02-13T19:37:42.795173575Z" level=error msg="Failed to destroy network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:42.796150 containerd[1467]: time="2025-02-13T19:37:42.795926644Z" level=error msg="encountered an error cleaning up failed sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:42.796150 containerd[1467]: time="2025-02-13T19:37:42.796024883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-znm5m,Uid:22a00d22-2381-4428-8e2c-0bd83180a5f0,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:42.796382 kubelet[1981]: E0213 19:37:42.796322 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 19:37:42.796464 kubelet[1981]: E0213 19:37:42.796417 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-znm5m" Feb 13 19:37:42.796464 kubelet[1981]: E0213 19:37:42.796445 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-znm5m" Feb 13 19:37:42.796521 kubelet[1981]: E0213 19:37:42.796490 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-znm5m_default(22a00d22-2381-4428-8e2c-0bd83180a5f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-znm5m_default(22a00d22-2381-4428-8e2c-0bd83180a5f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-znm5m" podUID="22a00d22-2381-4428-8e2c-0bd83180a5f0" Feb 13 19:37:43.307440 kubelet[1981]: E0213 19:37:43.306455 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:43.487333 kubelet[1981]: I0213 19:37:43.487306 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce" Feb 13 19:37:43.488306 containerd[1467]: time="2025-02-13T19:37:43.488275152Z" level=info msg="StopPodSandbox for \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\"" Feb 13 19:37:43.488595 containerd[1467]: time="2025-02-13T19:37:43.488478309Z" level=info msg="Ensure that sandbox f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce in task-service has been cleanup successfully" Feb 13 19:37:43.491218 containerd[1467]: time="2025-02-13T19:37:43.489646253Z" level=info msg="TearDown network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" successfully" Feb 13 19:37:43.491218 containerd[1467]: time="2025-02-13T19:37:43.489671413Z" level=info msg="StopPodSandbox for \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" returns successfully" Feb 13 19:37:43.491218 containerd[1467]: time="2025-02-13T19:37:43.490124927Z" level=info msg="StopPodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\"" Feb 13 19:37:43.491218 containerd[1467]: time="2025-02-13T19:37:43.490197526Z" level=info msg="TearDown network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" successfully" Feb 13 19:37:43.491218 containerd[1467]: time="2025-02-13T19:37:43.490207325Z" level=info msg="StopPodSandbox 
for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" returns successfully" Feb 13 19:37:43.490970 systemd[1]: run-netns-cni\x2da0947278\x2d6605\x2ddd20\x2d7f7c\x2db4e9b36ae787.mount: Deactivated successfully. Feb 13 19:37:43.491748 kubelet[1981]: I0213 19:37:43.491601 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24" Feb 13 19:37:43.493567 containerd[1467]: time="2025-02-13T19:37:43.493536999Z" level=info msg="StopPodSandbox for \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\"" Feb 13 19:37:43.494810 containerd[1467]: time="2025-02-13T19:37:43.493691837Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" Feb 13 19:37:43.494893 containerd[1467]: time="2025-02-13T19:37:43.494873301Z" level=info msg="TearDown network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" successfully" Feb 13 19:37:43.494893 containerd[1467]: time="2025-02-13T19:37:43.494889541Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" returns successfully" Feb 13 19:37:43.494959 containerd[1467]: time="2025-02-13T19:37:43.494774582Z" level=info msg="Ensure that sandbox 8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24 in task-service has been cleanup successfully" Feb 13 19:37:43.495699 containerd[1467]: time="2025-02-13T19:37:43.495664930Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:37:43.495766 containerd[1467]: time="2025-02-13T19:37:43.495756769Z" level=info msg="TearDown network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" successfully" Feb 13 19:37:43.495786 containerd[1467]: time="2025-02-13T19:37:43.495768408Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" returns successfully" Feb 13 19:37:43.497008 systemd[1]: run-netns-cni\x2df3b9f1a2\x2d5f40\x2d245c\x2d0f24\x2d1a32fbd24f5d.mount: Deactivated successfully. 
Feb 13 19:37:43.497542 containerd[1467]: time="2025-02-13T19:37:43.497491824Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:37:43.497690 containerd[1467]: time="2025-02-13T19:37:43.497670022Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:37:43.497690 containerd[1467]: time="2025-02-13T19:37:43.497688462Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:37:43.498972 containerd[1467]: time="2025-02-13T19:37:43.498711488Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:37:43.498972 containerd[1467]: time="2025-02-13T19:37:43.498800246Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:37:43.498972 containerd[1467]: time="2025-02-13T19:37:43.498811926Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:37:43.498972 containerd[1467]: time="2025-02-13T19:37:43.498866645Z" level=info msg="TearDown network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" successfully" Feb 13 19:37:43.498972 containerd[1467]: time="2025-02-13T19:37:43.498883205Z" level=info msg="StopPodSandbox for \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" returns successfully" Feb 13 19:37:43.499846 containerd[1467]: time="2025-02-13T19:37:43.499639955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-znm5m,Uid:22a00d22-2381-4428-8e2c-0bd83180a5f0,Namespace:default,Attempt:1,}" Feb 13 19:37:43.499846 containerd[1467]: time="2025-02-13T19:37:43.499721154Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:43.499846 containerd[1467]: time="2025-02-13T19:37:43.499791073Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:43.499846 containerd[1467]: time="2025-02-13T19:37:43.499799752Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:43.500787 containerd[1467]: time="2025-02-13T19:37:43.500612741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:7,}" Feb 13 19:37:43.519156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675341022.mount: Deactivated successfully. 
Feb 13 19:37:43.579739 containerd[1467]: time="2025-02-13T19:37:43.578832777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:43.583622 containerd[1467]: time="2025-02-13T19:37:43.583529951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:37:43.586915 containerd[1467]: time="2025-02-13T19:37:43.586873945Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:43.593669 containerd[1467]: time="2025-02-13T19:37:43.593621371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:43.594349 containerd[1467]: time="2025-02-13T19:37:43.594310762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.153978717s" Feb 13 19:37:43.594349 containerd[1467]: time="2025-02-13T19:37:43.594343561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:37:43.604335 containerd[1467]: time="2025-02-13T19:37:43.604291544Z" level=info msg="CreateContainer within sandbox \"31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:37:43.629611 containerd[1467]: time="2025-02-13T19:37:43.629557313Z" level=info msg="CreateContainer within sandbox \"31aebfa7967c8f6c2e584236df731c1fa8ada4faf3a05a3d0fa1e065683c5270\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"648492246adc14c52145b267fd90fb9d0861caa554cc3b2f1529f1eb9e76cb4e\"" Feb 13 19:37:43.630540 containerd[1467]: time="2025-02-13T19:37:43.630500340Z" level=info msg="StartContainer for \"648492246adc14c52145b267fd90fb9d0861caa554cc3b2f1529f1eb9e76cb4e\"" Feb 13 19:37:43.635181 containerd[1467]: time="2025-02-13T19:37:43.635136556Z" level=error msg="Failed to destroy network for sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.636197 containerd[1467]: time="2025-02-13T19:37:43.636160782Z" level=error msg="encountered an error cleaning up failed sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.636958 containerd[1467]: time="2025-02-13T19:37:43.636341259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-znm5m,Uid:22a00d22-2381-4428-8e2c-0bd83180a5f0,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.637063 kubelet[1981]: E0213 19:37:43.636881 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.637063 kubelet[1981]: E0213 19:37:43.636934 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-znm5m" Feb 13 19:37:43.637063 kubelet[1981]: E0213 19:37:43.636954 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-znm5m" Feb 13 19:37:43.637174 kubelet[1981]: E0213 19:37:43.637003 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-znm5m_default(22a00d22-2381-4428-8e2c-0bd83180a5f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-znm5m_default(22a00d22-2381-4428-8e2c-0bd83180a5f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-znm5m" podUID="22a00d22-2381-4428-8e2c-0bd83180a5f0" Feb 13 19:37:43.660612 systemd[1]: Started cri-containerd-648492246adc14c52145b267fd90fb9d0861caa554cc3b2f1529f1eb9e76cb4e.scope - libcontainer container 648492246adc14c52145b267fd90fb9d0861caa554cc3b2f1529f1eb9e76cb4e. 
Feb 13 19:37:43.677730 containerd[1467]: time="2025-02-13T19:37:43.677555968Z" level=error msg="Failed to destroy network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.678600 containerd[1467]: time="2025-02-13T19:37:43.678354876Z" level=error msg="encountered an error cleaning up failed sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.678600 containerd[1467]: time="2025-02-13T19:37:43.678548914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.679192 kubelet[1981]: E0213 19:37:43.679114 1981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:37:43.679192 kubelet[1981]: E0213 19:37:43.679188 1981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:43.679598 kubelet[1981]: E0213 19:37:43.679209 1981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxcht" Feb 13 19:37:43.679598 kubelet[1981]: E0213 19:37:43.679254 1981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxcht_calico-system(f9f2320c-ee95-40d0-96bc-00a1af393157)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxcht" 
podUID="f9f2320c-ee95-40d0-96bc-00a1af393157" Feb 13 19:37:43.701516 containerd[1467]: time="2025-02-13T19:37:43.701450716Z" level=info msg="StartContainer for \"648492246adc14c52145b267fd90fb9d0861caa554cc3b2f1529f1eb9e76cb4e\" returns successfully" Feb 13 19:37:43.810543 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:37:43.810695 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:37:43.869559 update_engine[1455]: I20250213 19:37:43.869313 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:37:43.870130 update_engine[1455]: I20250213 19:37:43.869762 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:37:43.870216 update_engine[1455]: I20250213 19:37:43.870144 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:37:43.872183 update_engine[1455]: E20250213 19:37:43.872092 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:37:43.872323 update_engine[1455]: I20250213 19:37:43.872203 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 19:37:44.291863 kubelet[1981]: E0213 19:37:44.291813 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:44.307813 kubelet[1981]: E0213 19:37:44.307720 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:44.487385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37-shm.mount: Deactivated successfully. Feb 13 19:37:44.508623 kubelet[1981]: I0213 19:37:44.508546 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f" Feb 13 19:37:44.510847 containerd[1467]: time="2025-02-13T19:37:44.510620866Z" level=info msg="StopPodSandbox for \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\"" Feb 13 19:37:44.514310 containerd[1467]: time="2025-02-13T19:37:44.511961089Z" level=info msg="Ensure that sandbox 95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f in task-service has been cleanup successfully" Feb 13 19:37:44.514337 systemd[1]: run-netns-cni\x2d6254f597\x2d8d9b\x2d8b71\x2de307\x2d1839aab073c9.mount: Deactivated successfully. 
Feb 13 19:37:44.515611 containerd[1467]: time="2025-02-13T19:37:44.515188207Z" level=info msg="TearDown network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\" successfully" Feb 13 19:37:44.515611 containerd[1467]: time="2025-02-13T19:37:44.515226566Z" level=info msg="StopPodSandbox for \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\" returns successfully" Feb 13 19:37:44.515737 kubelet[1981]: I0213 19:37:44.515272 1981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37" Feb 13 19:37:44.517567 containerd[1467]: time="2025-02-13T19:37:44.517324059Z" level=info msg="StopPodSandbox for \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\"" Feb 13 19:37:44.517567 containerd[1467]: time="2025-02-13T19:37:44.517515216Z" level=info msg="Ensure that sandbox 9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37 in task-service has been cleanup successfully" Feb 13 19:37:44.518363 containerd[1467]: time="2025-02-13T19:37:44.517692854Z" level=info msg="TearDown network for sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\" successfully" Feb 13 19:37:44.518363 containerd[1467]: time="2025-02-13T19:37:44.517710654Z" level=info msg="StopPodSandbox for \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\" returns successfully" Feb 13 19:37:44.518363 containerd[1467]: time="2025-02-13T19:37:44.517922451Z" level=info msg="StopPodSandbox for \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\"" Feb 13 19:37:44.518363 containerd[1467]: time="2025-02-13T19:37:44.518055329Z" level=info msg="TearDown network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" successfully" Feb 13 19:37:44.518363 containerd[1467]: time="2025-02-13T19:37:44.518069249Z" level=info msg="StopPodSandbox for \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" returns successfully" Feb 13 19:37:44.519711 systemd[1]: run-netns-cni\x2d41f0514c\x2d1f29\x2d43a2\x2d05f3\x2df47142581620.mount: Deactivated successfully. 
Feb 13 19:37:44.523281 containerd[1467]: time="2025-02-13T19:37:44.523240982Z" level=info msg="StopPodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\"" Feb 13 19:37:44.523904 containerd[1467]: time="2025-02-13T19:37:44.523872773Z" level=info msg="TearDown network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" successfully" Feb 13 19:37:44.523904 containerd[1467]: time="2025-02-13T19:37:44.523899893Z" level=info msg="StopPodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" returns successfully" Feb 13 19:37:44.524848 containerd[1467]: time="2025-02-13T19:37:44.523884413Z" level=info msg="StopPodSandbox for \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\"" Feb 13 19:37:44.524848 containerd[1467]: time="2025-02-13T19:37:44.524029331Z" level=info msg="TearDown network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" successfully" Feb 13 19:37:44.524848 containerd[1467]: time="2025-02-13T19:37:44.524038731Z" level=info msg="StopPodSandbox for \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" returns successfully" Feb 13 19:37:44.526931 containerd[1467]: time="2025-02-13T19:37:44.526378501Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" Feb 13 19:37:44.527102 containerd[1467]: time="2025-02-13T19:37:44.527028172Z" level=info msg="TearDown network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" successfully" Feb 13 19:37:44.527102 containerd[1467]: time="2025-02-13T19:37:44.527045252Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" returns successfully" Feb 13 19:37:44.527102 containerd[1467]: time="2025-02-13T19:37:44.526561418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-znm5m,Uid:22a00d22-2381-4428-8e2c-0bd83180a5f0,Namespace:default,Attempt:2,}" Feb 13 19:37:44.527645 containerd[1467]: time="2025-02-13T19:37:44.527624524Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:37:44.527888 containerd[1467]: time="2025-02-13T19:37:44.527783362Z" level=info msg="TearDown network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" successfully" Feb 13 19:37:44.529077 containerd[1467]: time="2025-02-13T19:37:44.528904188Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" returns successfully" Feb 13 19:37:44.533141 containerd[1467]: time="2025-02-13T19:37:44.533103493Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:37:44.533269 containerd[1467]: time="2025-02-13T19:37:44.533236531Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:37:44.533269 containerd[1467]: time="2025-02-13T19:37:44.533247851Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:37:44.534252 containerd[1467]: time="2025-02-13T19:37:44.534055520Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:37:44.534252 containerd[1467]: time="2025-02-13T19:37:44.534152559Z" level=info msg="TearDown network for sandbox 
\"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:37:44.534252 containerd[1467]: time="2025-02-13T19:37:44.534162239Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:37:44.534854 containerd[1467]: time="2025-02-13T19:37:44.534716832Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:37:44.534854 containerd[1467]: time="2025-02-13T19:37:44.534794311Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:37:44.534854 containerd[1467]: time="2025-02-13T19:37:44.534802911Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:37:44.535695 containerd[1467]: time="2025-02-13T19:37:44.535657379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:8,}" Feb 13 19:37:44.753181 systemd-networkd[1376]: cali012dfafd62f: Link UP Feb 13 19:37:44.753909 systemd-networkd[1376]: cali012dfafd62f: Gained carrier Feb 13 19:37:44.764508 kubelet[1981]: I0213 19:37:44.764451 1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7qq6h" podStartSLOduration=3.691630789 podStartE2EDuration="20.764432272s" podCreationTimestamp="2025-02-13 19:37:24 +0000 UTC" firstStartedPulling="2025-02-13 19:37:26.522726142 +0000 UTC m=+2.977643649" lastFinishedPulling="2025-02-13 19:37:43.595527625 +0000 UTC m=+20.050445132" observedRunningTime="2025-02-13 19:37:44.525246915 +0000 UTC m=+20.980164422" watchObservedRunningTime="2025-02-13 19:37:44.764432272 +0000 UTC m=+21.219349779" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.592 [INFO][2920] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.623 [INFO][2920] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0 nginx-deployment-7fcdb87857- default 22a00d22-2381-4428-8e2c-0bd83180a5f0 1521 0 2025-02-13 19:37:42 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-7fcdb87857-znm5m eth0 default [] [] [kns.default ksa.default.default] cali012dfafd62f [] []}} ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.623 [INFO][2920] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.677 [INFO][2944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" HandleID="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" 
Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.697 [INFO][2944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" HandleID="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000429490), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-7fcdb87857-znm5m", "timestamp":"2025-02-13 19:37:44.677902882 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.697 [INFO][2944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.697 [INFO][2944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.698 [INFO][2944] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.701 [INFO][2944] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.707 [INFO][2944] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.713 [INFO][2944] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.716 [INFO][2944] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.721 [INFO][2944] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.721 [INFO][2944] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.724 [INFO][2944] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333 Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.730 [INFO][2944] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.740 [INFO][2944] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.740 [INFO][2944] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" host="10.0.0.4" Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.740 [INFO][2944] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:37:44.768377 containerd[1467]: 2025-02-13 19:37:44.740 [INFO][2944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" HandleID="k8s-pod-network.db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" Feb 13 19:37:44.768929 containerd[1467]: 2025-02-13 19:37:44.744 [INFO][2920] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"22a00d22-2381-4428-8e2c-0bd83180a5f0", ResourceVersion:"1521", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-znm5m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali012dfafd62f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:37:44.768929 containerd[1467]: 2025-02-13 19:37:44.744 [INFO][2920] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" Feb 13 19:37:44.768929 containerd[1467]: 2025-02-13 19:37:44.744 [INFO][2920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali012dfafd62f ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" Feb 13 19:37:44.768929 containerd[1467]: 2025-02-13 19:37:44.754 [INFO][2920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" Feb 13 19:37:44.768929 containerd[1467]: 2025-02-13 19:37:44.755 [INFO][2920] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" 
WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"22a00d22-2381-4428-8e2c-0bd83180a5f0", ResourceVersion:"1521", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333", Pod:"nginx-deployment-7fcdb87857-znm5m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali012dfafd62f", MAC:"66:7b:17:82:9e:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:37:44.768929 containerd[1467]: 2025-02-13 19:37:44.766 [INFO][2920] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333" Namespace="default" Pod="nginx-deployment-7fcdb87857-znm5m" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--znm5m-eth0" Feb 13 19:37:44.797235 containerd[1467]: time="2025-02-13T19:37:44.796803809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:44.797235 containerd[1467]: time="2025-02-13T19:37:44.796903248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:44.797235 containerd[1467]: time="2025-02-13T19:37:44.796932408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:44.797235 containerd[1467]: time="2025-02-13T19:37:44.797096526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:44.820663 systemd[1]: Started cri-containerd-db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333.scope - libcontainer container db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333. 
Feb 13 19:37:44.856457 systemd-networkd[1376]: calie74d791578c: Link UP Feb 13 19:37:44.857745 systemd-networkd[1376]: calie74d791578c: Gained carrier Feb 13 19:37:44.866155 containerd[1467]: time="2025-02-13T19:37:44.866095985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-znm5m,Uid:22a00d22-2381-4428-8e2c-0bd83180a5f0,Namespace:default,Attempt:2,} returns sandbox id \"db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333\"" Feb 13 19:37:44.867763 containerd[1467]: time="2025-02-13T19:37:44.867732083Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.603 [INFO][2917] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.626 [INFO][2917] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--lxcht-eth0 csi-node-driver- calico-system f9f2320c-ee95-40d0-96bc-00a1af393157 1430 0 2025-02-13 19:37:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-lxcht eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie74d791578c [] []}} ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.626 [INFO][2917] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.677 [INFO][2943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" HandleID="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Workload="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.699 [INFO][2943] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" HandleID="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Workload="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c330), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-lxcht", "timestamp":"2025-02-13 19:37:44.677895202 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.699 [INFO][2943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.740 [INFO][2943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.740 [INFO][2943] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.802 [INFO][2943] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.809 [INFO][2943] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.819 [INFO][2943] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.823 [INFO][2943] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.827 [INFO][2943] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.828 [INFO][2943] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.831 [INFO][2943] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191 Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.836 [INFO][2943] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.844 [INFO][2943] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.845 [INFO][2943] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" host="10.0.0.4" Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.845 [INFO][2943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:37:44.878887 containerd[1467]: 2025-02-13 19:37:44.845 [INFO][2943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" HandleID="k8s-pod-network.63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Workload="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" Feb 13 19:37:44.880091 containerd[1467]: 2025-02-13 19:37:44.847 [INFO][2917] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--lxcht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9f2320c-ee95-40d0-96bc-00a1af393157", ResourceVersion:"1430", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-lxcht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie74d791578c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:37:44.880091 containerd[1467]: 2025-02-13 19:37:44.848 [INFO][2917] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" Feb 13 19:37:44.880091 containerd[1467]: 2025-02-13 19:37:44.848 [INFO][2917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie74d791578c ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" Feb 13 19:37:44.880091 containerd[1467]: 2025-02-13 19:37:44.858 [INFO][2917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" Feb 13 19:37:44.880091 containerd[1467]: 2025-02-13 19:37:44.858 [INFO][2917] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--lxcht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9f2320c-ee95-40d0-96bc-00a1af393157", ResourceVersion:"1430", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191", Pod:"csi-node-driver-lxcht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie74d791578c", MAC:"46:fa:cb:74:94:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:37:44.880091 containerd[1467]: 2025-02-13 19:37:44.876 [INFO][2917] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191" Namespace="calico-system" Pod="csi-node-driver-lxcht" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--lxcht-eth0" Feb 13 19:37:44.905346 containerd[1467]: time="2025-02-13T19:37:44.905233473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:44.905879 containerd[1467]: time="2025-02-13T19:37:44.905593189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:44.905879 containerd[1467]: time="2025-02-13T19:37:44.905612268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:44.905879 containerd[1467]: time="2025-02-13T19:37:44.905783906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:44.922598 systemd[1]: Started cri-containerd-63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191.scope - libcontainer container 63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191. 
Feb 13 19:37:44.954092 containerd[1467]: time="2025-02-13T19:37:44.953942197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxcht,Uid:f9f2320c-ee95-40d0-96bc-00a1af393157,Namespace:calico-system,Attempt:8,} returns sandbox id \"63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191\"" Feb 13 19:37:45.308517 kubelet[1981]: E0213 19:37:45.308455 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:45.403487 kernel: bpftool[3175]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:37:45.625284 systemd-networkd[1376]: vxlan.calico: Link UP Feb 13 19:37:45.625361 systemd-networkd[1376]: vxlan.calico: Gained carrier Feb 13 19:37:46.309093 kubelet[1981]: E0213 19:37:46.309028 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:46.348855 systemd-networkd[1376]: calie74d791578c: Gained IPv6LL Feb 13 19:37:46.550525 systemd[1]: run-containerd-runc-k8s.io-648492246adc14c52145b267fd90fb9d0861caa554cc3b2f1529f1eb9e76cb4e-runc.YQPs6U.mount: Deactivated successfully. Feb 13 19:37:46.605756 systemd-networkd[1376]: cali012dfafd62f: Gained IPv6LL Feb 13 19:37:46.925321 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Feb 13 19:37:47.309746 kubelet[1981]: E0213 19:37:47.309632 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:47.981448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060635432.mount: Deactivated successfully. Feb 13 19:37:48.310889 kubelet[1981]: E0213 19:37:48.310543 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:48.730641 containerd[1467]: time="2025-02-13T19:37:48.730569904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:48.732832 containerd[1467]: time="2025-02-13T19:37:48.732560124Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:37:48.733792 containerd[1467]: time="2025-02-13T19:37:48.733735832Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:48.737266 containerd[1467]: time="2025-02-13T19:37:48.737217957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:48.739076 containerd[1467]: time="2025-02-13T19:37:48.739026379Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 3.871253536s" Feb 13 19:37:48.739407 containerd[1467]: time="2025-02-13T19:37:48.739247057Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:37:48.743578 containerd[1467]: time="2025-02-13T19:37:48.743539894Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:37:48.745151 containerd[1467]: time="2025-02-13T19:37:48.744883680Z" level=info msg="CreateContainer within sandbox \"db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:37:48.762941 containerd[1467]: time="2025-02-13T19:37:48.762749060Z" level=info msg="CreateContainer within sandbox \"db8732b7b8ca83fd3477633d815b44c660074c36bce6f37c9a0205d646f0f333\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3cd0110764ceca9e3846c84be966ccf32e7bdbee898f2fb996691ccc8d60465e\"" Feb 13 19:37:48.765088 containerd[1467]: time="2025-02-13T19:37:48.763945848Z" level=info msg="StartContainer for \"3cd0110764ceca9e3846c84be966ccf32e7bdbee898f2fb996691ccc8d60465e\"" Feb 13 19:37:48.801724 systemd[1]: Started cri-containerd-3cd0110764ceca9e3846c84be966ccf32e7bdbee898f2fb996691ccc8d60465e.scope - libcontainer container 3cd0110764ceca9e3846c84be966ccf32e7bdbee898f2fb996691ccc8d60465e. Feb 13 19:37:48.832896 containerd[1467]: time="2025-02-13T19:37:48.832755675Z" level=info msg="StartContainer for \"3cd0110764ceca9e3846c84be966ccf32e7bdbee898f2fb996691ccc8d60465e\" returns successfully" Feb 13 19:37:49.311494 kubelet[1981]: E0213 19:37:49.311422 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:49.561945 kubelet[1981]: I0213 19:37:49.561547 1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-znm5m" podStartSLOduration=3.68621623 podStartE2EDuration="7.561530363s" podCreationTimestamp="2025-02-13 19:37:42 +0000 UTC" firstStartedPulling="2025-02-13 19:37:44.867154371 +0000 UTC m=+21.322071878" lastFinishedPulling="2025-02-13 19:37:48.742468504 +0000 UTC m=+25.197386011" observedRunningTime="2025-02-13 19:37:49.560993248 +0000 UTC m=+26.015910715" watchObservedRunningTime="2025-02-13 19:37:49.561530363 +0000 UTC m=+26.016447870" Feb 13 19:37:50.311671 kubelet[1981]: E0213 19:37:50.311590 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:50.562505 containerd[1467]: time="2025-02-13T19:37:50.561517559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:50.564317 containerd[1467]: time="2025-02-13T19:37:50.564258735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:37:50.566422 containerd[1467]: time="2025-02-13T19:37:50.565613563Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:50.568079 containerd[1467]: time="2025-02-13T19:37:50.568024822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:50.568923 containerd[1467]: time="2025-02-13T19:37:50.568873655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.825049484s" Feb 13 19:37:50.569035 containerd[1467]: time="2025-02-13T19:37:50.569019174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:37:50.571831 containerd[1467]: time="2025-02-13T19:37:50.571794109Z" level=info msg="CreateContainer within sandbox \"63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:37:50.593658 containerd[1467]: time="2025-02-13T19:37:50.593608799Z" level=info msg="CreateContainer within sandbox \"63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2341dbbaa4f171d5f9123e2e2b162711dba8d7e2aabb769b8f2669233a8ffce7\"" Feb 13 19:37:50.594722 containerd[1467]: time="2025-02-13T19:37:50.594690390Z" level=info msg="StartContainer for \"2341dbbaa4f171d5f9123e2e2b162711dba8d7e2aabb769b8f2669233a8ffce7\"" Feb 13 19:37:50.627711 systemd[1]: Started cri-containerd-2341dbbaa4f171d5f9123e2e2b162711dba8d7e2aabb769b8f2669233a8ffce7.scope - libcontainer container 2341dbbaa4f171d5f9123e2e2b162711dba8d7e2aabb769b8f2669233a8ffce7. Feb 13 19:37:50.669024 containerd[1467]: time="2025-02-13T19:37:50.668974903Z" level=info msg="StartContainer for \"2341dbbaa4f171d5f9123e2e2b162711dba8d7e2aabb769b8f2669233a8ffce7\" returns successfully" Feb 13 19:37:50.671105 containerd[1467]: time="2025-02-13T19:37:50.671067445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:37:51.312126 kubelet[1981]: E0213 19:37:51.312070 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:52.313046 kubelet[1981]: E0213 19:37:52.312965 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:52.549172 containerd[1467]: time="2025-02-13T19:37:52.549118311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:52.552106 containerd[1467]: time="2025-02-13T19:37:52.552051009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:37:52.553310 containerd[1467]: time="2025-02-13T19:37:52.553238080Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:52.555638 containerd[1467]: time="2025-02-13T19:37:52.555588463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:52.556797 containerd[1467]: time="2025-02-13T19:37:52.556755094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.885479051s" Feb 13 19:37:52.556797 containerd[1467]: time="2025-02-13T19:37:52.556793974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:37:52.559533 containerd[1467]: time="2025-02-13T19:37:52.559494154Z" level=info msg="CreateContainer within sandbox \"63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:37:52.580125 containerd[1467]: time="2025-02-13T19:37:52.579837962Z" level=info msg="CreateContainer within sandbox \"63bc031db8adaf27716cc90799e72fa88c398ed0a4211f53a42932f8a7b60191\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03c43c97674fb2881b61f907f8e03ab3569a23a30c62a83f944e25e010bbcae1\"" Feb 13 19:37:52.582842 containerd[1467]: time="2025-02-13T19:37:52.581789668Z" level=info msg="StartContainer for \"03c43c97674fb2881b61f907f8e03ab3569a23a30c62a83f944e25e010bbcae1\"" Feb 13 19:37:52.619578 systemd[1]: Started cri-containerd-03c43c97674fb2881b61f907f8e03ab3569a23a30c62a83f944e25e010bbcae1.scope - libcontainer container 03c43c97674fb2881b61f907f8e03ab3569a23a30c62a83f944e25e010bbcae1. Feb 13 19:37:52.650078 containerd[1467]: time="2025-02-13T19:37:52.649999881Z" level=info msg="StartContainer for \"03c43c97674fb2881b61f907f8e03ab3569a23a30c62a83f944e25e010bbcae1\" returns successfully" Feb 13 19:37:53.313436 kubelet[1981]: E0213 19:37:53.313311 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:53.428350 kubelet[1981]: I0213 19:37:53.428296 1981 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:37:53.428350 kubelet[1981]: I0213 19:37:53.428341 1981 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:37:53.869673 update_engine[1455]: I20250213 19:37:53.869499 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:37:53.870458 update_engine[1455]: I20250213 19:37:53.869969 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:37:53.870663 update_engine[1455]: I20250213 19:37:53.870599 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 19:37:53.871421 update_engine[1455]: E20250213 19:37:53.871286 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:37:53.871593 update_engine[1455]: I20250213 19:37:53.871441 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 19:37:54.314197 kubelet[1981]: E0213 19:37:54.314127 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:55.314781 kubelet[1981]: E0213 19:37:55.314575 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:56.231247 kubelet[1981]: I0213 19:37:56.231144 1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lxcht" podStartSLOduration=24.629460435 podStartE2EDuration="32.231120113s" podCreationTimestamp="2025-02-13 19:37:24 +0000 UTC" firstStartedPulling="2025-02-13 19:37:44.956182448 +0000 UTC m=+21.411099955" lastFinishedPulling="2025-02-13 19:37:52.557842126 +0000 UTC m=+29.012759633" observedRunningTime="2025-02-13 19:37:53.588666417 +0000 UTC m=+30.043583964" watchObservedRunningTime="2025-02-13 19:37:56.231120113 +0000 UTC m=+32.686037620" Feb 13 19:37:56.238340 systemd[1]: Created slice kubepods-besteffort-pod9cf4a90d_d757_4923_b80c_f73464f92519.slice - libcontainer container kubepods-besteffort-pod9cf4a90d_d757_4923_b80c_f73464f92519.slice. Feb 13 19:37:56.240282 kubelet[1981]: I0213 19:37:56.240240 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qv7\" (UniqueName: \"kubernetes.io/projected/9cf4a90d-d757-4923-b80c-f73464f92519-kube-api-access-j7qv7\") pod \"nfs-server-provisioner-0\" (UID: \"9cf4a90d-d757-4923-b80c-f73464f92519\") " pod="default/nfs-server-provisioner-0" Feb 13 19:37:56.241076 kubelet[1981]: I0213 19:37:56.240979 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9cf4a90d-d757-4923-b80c-f73464f92519-data\") pod \"nfs-server-provisioner-0\" (UID: \"9cf4a90d-d757-4923-b80c-f73464f92519\") " pod="default/nfs-server-provisioner-0" Feb 13 19:37:56.314924 kubelet[1981]: E0213 19:37:56.314799 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:56.543591 containerd[1467]: time="2025-02-13T19:37:56.543089676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9cf4a90d-d757-4923-b80c-f73464f92519,Namespace:default,Attempt:0,}" Feb 13 19:37:56.709087 systemd-networkd[1376]: cali60e51b789ff: Link UP Feb 13 19:37:56.709274 systemd-networkd[1376]: cali60e51b789ff: Gained carrier Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.605 [INFO][3477] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 9cf4a90d-d757-4923-b80c-f73464f92519 1619 0 2025-02-13 19:37:56 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner 
statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.605 [INFO][3477] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.638 [INFO][3488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" HandleID="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.659 [INFO][3488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" HandleID="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000291080), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:37:56.638291309 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.659 [INFO][3488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.659 [INFO][3488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.659 [INFO][3488] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.665 [INFO][3488] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.671 [INFO][3488] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.679 [INFO][3488] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.682 [INFO][3488] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.685 [INFO][3488] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.686 [INFO][3488] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.688 [INFO][3488] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.693 [INFO][3488] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.702 [INFO][3488] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.702 [INFO][3488] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" host="10.0.0.4" Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.702 [INFO][3488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:37:56.726063 containerd[1467]: 2025-02-13 19:37:56.702 [INFO][3488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" HandleID="k8s-pod-network.022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:37:56.726859 containerd[1467]: 2025-02-13 19:37:56.704 [INFO][3477] cni-plugin/k8s.go 386: Populated endpoint ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9cf4a90d-d757-4923-b80c-f73464f92519", ResourceVersion:"1619", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:37:56.726859 containerd[1467]: 2025-02-13 19:37:56.705 [INFO][3477] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:37:56.726859 containerd[1467]: 2025-02-13 19:37:56.705 [INFO][3477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:37:56.726859 containerd[1467]: 2025-02-13 19:37:56.707 [INFO][3477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:37:56.727070 containerd[1467]: 2025-02-13 19:37:56.707 [INFO][3477] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9cf4a90d-d757-4923-b80c-f73464f92519", ResourceVersion:"1619", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"02:e1:22:a9:52:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:37:56.727070 containerd[1467]: 2025-02-13 19:37:56.721 [INFO][3477] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:37:56.754550 containerd[1467]: time="2025-02-13T19:37:56.754316835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:56.755025 containerd[1467]: time="2025-02-13T19:37:56.754544754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:56.755145 containerd[1467]: time="2025-02-13T19:37:56.755095631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:56.755384 containerd[1467]: time="2025-02-13T19:37:56.755334590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:56.783686 systemd[1]: Started cri-containerd-022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c.scope - libcontainer container 022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c. 
Feb 13 19:37:56.824680 containerd[1467]: time="2025-02-13T19:37:56.824622596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9cf4a90d-d757-4923-b80c-f73464f92519,Namespace:default,Attempt:0,} returns sandbox id \"022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c\"" Feb 13 19:37:56.827030 containerd[1467]: time="2025-02-13T19:37:56.826956824Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:37:57.315566 kubelet[1981]: E0213 19:37:57.315517 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:58.060925 systemd-networkd[1376]: cali60e51b789ff: Gained IPv6LL Feb 13 19:37:58.316524 kubelet[1981]: E0213 19:37:58.315884 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:37:59.131313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3996810136.mount: Deactivated successfully. Feb 13 19:37:59.316630 kubelet[1981]: E0213 19:37:59.316553 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:00.317689 kubelet[1981]: E0213 19:38:00.317611 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:00.857920 containerd[1467]: time="2025-02-13T19:38:00.857779852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:00.859280 containerd[1467]: time="2025-02-13T19:38:00.859096928Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Feb 13 19:38:00.860436 containerd[1467]: time="2025-02-13T19:38:00.860355404Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:00.866060 containerd[1467]: time="2025-02-13T19:38:00.865459708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:00.866823 containerd[1467]: time="2025-02-13T19:38:00.866745464Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.039706121s" Feb 13 19:38:00.866823 containerd[1467]: time="2025-02-13T19:38:00.866786224Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:38:00.870966 containerd[1467]: time="2025-02-13T19:38:00.870768252Z" level=info msg="CreateContainer within sandbox \"022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:38:00.890659 containerd[1467]: time="2025-02-13T19:38:00.890608031Z" level=info msg="CreateContainer within sandbox 
\"022242abcee64e92f69bc7b1e952c395be52b8264e799dd94ad156ac0c1a654c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bafcf458539ff84920cd255c68b29915ffed7b355173b13ebc2756322f736243\"" Feb 13 19:38:00.892746 containerd[1467]: time="2025-02-13T19:38:00.891580188Z" level=info msg="StartContainer for \"bafcf458539ff84920cd255c68b29915ffed7b355173b13ebc2756322f736243\"" Feb 13 19:38:00.928649 systemd[1]: Started cri-containerd-bafcf458539ff84920cd255c68b29915ffed7b355173b13ebc2756322f736243.scope - libcontainer container bafcf458539ff84920cd255c68b29915ffed7b355173b13ebc2756322f736243. Feb 13 19:38:00.961700 containerd[1467]: time="2025-02-13T19:38:00.961595492Z" level=info msg="StartContainer for \"bafcf458539ff84920cd255c68b29915ffed7b355173b13ebc2756322f736243\" returns successfully" Feb 13 19:38:01.319078 kubelet[1981]: E0213 19:38:01.318590 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:01.612813 kubelet[1981]: I0213 19:38:01.612572 1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.569775469 podStartE2EDuration="5.612547179s" podCreationTimestamp="2025-02-13 19:37:56 +0000 UTC" firstStartedPulling="2025-02-13 19:37:56.826104028 +0000 UTC m=+33.281021495" lastFinishedPulling="2025-02-13 19:38:00.868875698 +0000 UTC m=+37.323793205" observedRunningTime="2025-02-13 19:38:01.6122189 +0000 UTC m=+38.067136407" watchObservedRunningTime="2025-02-13 19:38:01.612547179 +0000 UTC m=+38.067464686" Feb 13 19:38:02.319719 kubelet[1981]: E0213 19:38:02.319624 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:03.320520 kubelet[1981]: E0213 19:38:03.320456 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:03.870980 update_engine[1455]: I20250213 19:38:03.870366 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:38:03.870980 update_engine[1455]: I20250213 19:38:03.870719 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:38:03.871654 update_engine[1455]: I20250213 19:38:03.871054 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:38:03.871654 update_engine[1455]: E20250213 19:38:03.871579 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:38:03.871654 update_engine[1455]: I20250213 19:38:03.871642 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:38:03.871654 update_engine[1455]: I20250213 19:38:03.871654 1455 omaha_request_action.cc:617] Omaha request response: Feb 13 19:38:03.871964 update_engine[1455]: E20250213 19:38:03.871743 1455 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 19:38:03.871964 update_engine[1455]: I20250213 19:38:03.871765 1455 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 19:38:03.871964 update_engine[1455]: I20250213 19:38:03.871773 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:38:03.871964 update_engine[1455]: I20250213 19:38:03.871809 1455 update_attempter.cc:306] Processing Done. 
Feb 13 19:38:03.871964 update_engine[1455]: E20250213 19:38:03.871827 1455 update_attempter.cc:619] Update failed. Feb 13 19:38:03.871964 update_engine[1455]: I20250213 19:38:03.871834 1455 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 19:38:03.871964 update_engine[1455]: I20250213 19:38:03.871841 1455 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 19:38:03.871964 update_engine[1455]: I20250213 19:38:03.871850 1455 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 19:38:03.872294 update_engine[1455]: I20250213 19:38:03.871966 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:38:03.872294 update_engine[1455]: I20250213 19:38:03.872030 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:38:03.872294 update_engine[1455]: I20250213 19:38:03.872045 1455 omaha_request_action.cc:272] Request: Feb 13 19:38:03.872294 update_engine[1455]: Feb 13 19:38:03.872294 update_engine[1455]: Feb 13 19:38:03.872294 update_engine[1455]: Feb 13 19:38:03.872294 update_engine[1455]: Feb 13 19:38:03.872294 update_engine[1455]: Feb 13 19:38:03.872294 update_engine[1455]: Feb 13 19:38:03.872294 update_engine[1455]: I20250213 19:38:03.872054 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:38:03.872733 update_engine[1455]: I20250213 19:38:03.872306 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:38:03.872733 update_engine[1455]: I20250213 19:38:03.872548 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:38:03.873107 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 19:38:03.873488 update_engine[1455]: E20250213 19:38:03.873199 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:38:03.873488 update_engine[1455]: I20250213 19:38:03.873246 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:38:03.873488 update_engine[1455]: I20250213 19:38:03.873253 1455 omaha_request_action.cc:617] Omaha request response: Feb 13 19:38:03.873488 update_engine[1455]: I20250213 19:38:03.873260 1455 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:38:03.873488 update_engine[1455]: I20250213 19:38:03.873266 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:38:03.873488 update_engine[1455]: I20250213 19:38:03.873271 1455 update_attempter.cc:306] Processing Done. Feb 13 19:38:03.873488 update_engine[1455]: I20250213 19:38:03.873277 1455 update_attempter.cc:310] Error event sent. 
Feb 13 19:38:03.873488 update_engine[1455]: I20250213 19:38:03.873286 1455 update_check_scheduler.cc:74] Next update check in 41m3s Feb 13 19:38:03.873757 locksmithd[1488]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 19:38:04.291569 kubelet[1981]: E0213 19:38:04.291515 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:04.320794 kubelet[1981]: E0213 19:38:04.320707 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:05.321542 kubelet[1981]: E0213 19:38:05.321456 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:06.322269 kubelet[1981]: E0213 19:38:06.322156 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:07.323335 kubelet[1981]: E0213 19:38:07.323255 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:08.324432 kubelet[1981]: E0213 19:38:08.324357 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:09.324768 kubelet[1981]: E0213 19:38:09.324687 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:10.324920 kubelet[1981]: E0213 19:38:10.324840 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:11.095568 systemd[1]: Created slice kubepods-besteffort-poda8a6bad1_2192_46ff_91f0_ff97e5cba38d.slice - libcontainer container kubepods-besteffort-poda8a6bad1_2192_46ff_91f0_ff97e5cba38d.slice. Feb 13 19:38:11.152213 kubelet[1981]: I0213 19:38:11.152039 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6549\" (UniqueName: \"kubernetes.io/projected/a8a6bad1-2192-46ff-91f0-ff97e5cba38d-kube-api-access-x6549\") pod \"test-pod-1\" (UID: \"a8a6bad1-2192-46ff-91f0-ff97e5cba38d\") " pod="default/test-pod-1" Feb 13 19:38:11.152213 kubelet[1981]: I0213 19:38:11.152118 1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ae191d1e-55fb-44f7-b14e-503d6d8fa69a\" (UniqueName: \"kubernetes.io/nfs/a8a6bad1-2192-46ff-91f0-ff97e5cba38d-pvc-ae191d1e-55fb-44f7-b14e-503d6d8fa69a\") pod \"test-pod-1\" (UID: \"a8a6bad1-2192-46ff-91f0-ff97e5cba38d\") " pod="default/test-pod-1" Feb 13 19:38:11.294005 kernel: FS-Cache: Loaded Feb 13 19:38:11.323706 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:38:11.323843 kernel: RPC: Registered udp transport module. Feb 13 19:38:11.323866 kernel: RPC: Registered tcp transport module. Feb 13 19:38:11.323930 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:38:11.324795 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 19:38:11.325115 kubelet[1981]: E0213 19:38:11.324935 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:11.506750 kernel: NFS: Registering the id_resolver key type Feb 13 19:38:11.506883 kernel: Key type id_resolver registered Feb 13 19:38:11.506914 kernel: Key type id_legacy registered Feb 13 19:38:11.536469 nfsidmap[3671]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:38:11.539723 nfsidmap[3672]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:38:11.701782 containerd[1467]: time="2025-02-13T19:38:11.701703018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a8a6bad1-2192-46ff-91f0-ff97e5cba38d,Namespace:default,Attempt:0,}" Feb 13 19:38:11.869520 systemd-networkd[1376]: cali5ec59c6bf6e: Link UP Feb 13 19:38:11.869659 systemd-networkd[1376]: cali5ec59c6bf6e: Gained carrier Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.769 [INFO][3674] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default a8a6bad1-2192-46ff-91f0-ff97e5cba38d 1676 0 2025-02-13 19:37:58 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.769 [INFO][3674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.804 [INFO][3686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" HandleID="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Workload="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.824 [INFO][3686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" HandleID="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220830), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-02-13 19:38:11.804299278 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.825 [INFO][3686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.825 [INFO][3686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.825 [INFO][3686] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.828 [INFO][3686] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.833 [INFO][3686] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.839 [INFO][3686] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.842 [INFO][3686] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.846 [INFO][3686] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.846 [INFO][3686] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.848 [INFO][3686] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.854 [INFO][3686] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.862 [INFO][3686] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.862 [INFO][3686] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" host="10.0.0.4" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.862 [INFO][3686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.862 [INFO][3686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" HandleID="k8s-pod-network.12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Workload="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:38:11.887001 containerd[1467]: 2025-02-13 19:38:11.866 [INFO][3674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a8a6bad1-2192-46ff-91f0-ff97e5cba38d", ResourceVersion:"1676", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:38:11.888117 containerd[1467]: 2025-02-13 19:38:11.866 [INFO][3674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:38:11.888117 containerd[1467]: 2025-02-13 19:38:11.866 [INFO][3674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:38:11.888117 containerd[1467]: 2025-02-13 19:38:11.871 [INFO][3674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:38:11.888117 containerd[1467]: 2025-02-13 19:38:11.872 [INFO][3674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a8a6bad1-2192-46ff-91f0-ff97e5cba38d", ResourceVersion:"1676", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 37, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"7e:9a:ee:35:fd:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:38:11.888117 containerd[1467]: 2025-02-13 19:38:11.884 [INFO][3674] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:38:11.915191 containerd[1467]: time="2025-02-13T19:38:11.914290429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:38:11.915191 containerd[1467]: time="2025-02-13T19:38:11.914445349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:38:11.915191 containerd[1467]: time="2025-02-13T19:38:11.914474269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:11.915191 containerd[1467]: time="2025-02-13T19:38:11.914589789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:11.933657 systemd[1]: Started cri-containerd-12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b.scope - libcontainer container 12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b. 
Feb 13 19:38:11.973025 containerd[1467]: time="2025-02-13T19:38:11.972949390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a8a6bad1-2192-46ff-91f0-ff97e5cba38d,Namespace:default,Attempt:0,} returns sandbox id \"12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b\"" Feb 13 19:38:11.974606 containerd[1467]: time="2025-02-13T19:38:11.974570672Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:38:12.326118 kubelet[1981]: E0213 19:38:12.326049 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:12.385672 containerd[1467]: time="2025-02-13T19:38:12.385618683Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:12.389431 containerd[1467]: time="2025-02-13T19:38:12.387724246Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:38:12.394442 containerd[1467]: time="2025-02-13T19:38:12.394347337Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 419.731145ms" Feb 13 19:38:12.394566 containerd[1467]: time="2025-02-13T19:38:12.394461578Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:38:12.400542 containerd[1467]: time="2025-02-13T19:38:12.400444468Z" level=info msg="CreateContainer within sandbox \"12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:38:12.421533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240440552.mount: Deactivated successfully. Feb 13 19:38:12.428149 containerd[1467]: time="2025-02-13T19:38:12.428104875Z" level=info msg="CreateContainer within sandbox \"12737104487c42f7d68ed8f02e6241f726a1fef4d17596e6d9972e15a783273b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c82d87bad2447e54d41d6ad1c28024b3acf12d2bf102d10ef3f678628e857b07\"" Feb 13 19:38:12.429254 containerd[1467]: time="2025-02-13T19:38:12.429222797Z" level=info msg="StartContainer for \"c82d87bad2447e54d41d6ad1c28024b3acf12d2bf102d10ef3f678628e857b07\"" Feb 13 19:38:12.474630 systemd[1]: Started cri-containerd-c82d87bad2447e54d41d6ad1c28024b3acf12d2bf102d10ef3f678628e857b07.scope - libcontainer container c82d87bad2447e54d41d6ad1c28024b3acf12d2bf102d10ef3f678628e857b07. 
Feb 13 19:38:12.508547 containerd[1467]: time="2025-02-13T19:38:12.507906771Z" level=info msg="StartContainer for \"c82d87bad2447e54d41d6ad1c28024b3acf12d2bf102d10ef3f678628e857b07\" returns successfully" Feb 13 19:38:12.650035 kubelet[1981]: I0213 19:38:12.649968 1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.22555514 podStartE2EDuration="14.649944213s" podCreationTimestamp="2025-02-13 19:37:58 +0000 UTC" firstStartedPulling="2025-02-13 19:38:11.974086031 +0000 UTC m=+48.429003538" lastFinishedPulling="2025-02-13 19:38:12.398475104 +0000 UTC m=+48.853392611" observedRunningTime="2025-02-13 19:38:12.649345172 +0000 UTC m=+49.104262679" watchObservedRunningTime="2025-02-13 19:38:12.649944213 +0000 UTC m=+49.104861720" Feb 13 19:38:12.972890 systemd-networkd[1376]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:38:13.272229 systemd[1]: run-containerd-runc-k8s.io-c82d87bad2447e54d41d6ad1c28024b3acf12d2bf102d10ef3f678628e857b07-runc.dox75d.mount: Deactivated successfully. Feb 13 19:38:13.326560 kubelet[1981]: E0213 19:38:13.326476 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:14.327309 kubelet[1981]: E0213 19:38:14.327229 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:15.327610 kubelet[1981]: E0213 19:38:15.327512 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:16.328162 kubelet[1981]: E0213 19:38:16.328101 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:17.329356 kubelet[1981]: E0213 19:38:17.329283 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:18.330381 kubelet[1981]: E0213 19:38:18.330296 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:19.330596 kubelet[1981]: E0213 19:38:19.330519 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:20.331455 kubelet[1981]: E0213 19:38:20.331296 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:21.331575 kubelet[1981]: E0213 19:38:21.331506 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:22.332080 kubelet[1981]: E0213 19:38:22.331964 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:23.332656 kubelet[1981]: E0213 19:38:23.332589 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:24.292379 kubelet[1981]: E0213 19:38:24.292288 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:24.331913 containerd[1467]: time="2025-02-13T19:38:24.331851897Z" level=info msg="StopPodSandbox for \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\"" Feb 13 19:38:24.332327 containerd[1467]: time="2025-02-13T19:38:24.332028898Z" level=info msg="TearDown network for sandbox 
\"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" successfully" Feb 13 19:38:24.332327 containerd[1467]: time="2025-02-13T19:38:24.332046018Z" level=info msg="StopPodSandbox for \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" returns successfully" Feb 13 19:38:24.332786 kubelet[1981]: E0213 19:38:24.332724 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:24.333124 containerd[1467]: time="2025-02-13T19:38:24.332737141Z" level=info msg="RemovePodSandbox for \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\"" Feb 13 19:38:24.333124 containerd[1467]: time="2025-02-13T19:38:24.332782821Z" level=info msg="Forcibly stopping sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\"" Feb 13 19:38:24.333124 containerd[1467]: time="2025-02-13T19:38:24.332893102Z" level=info msg="TearDown network for sandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" successfully" Feb 13 19:38:24.337222 containerd[1467]: time="2025-02-13T19:38:24.337174003Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:38:24.337796 containerd[1467]: time="2025-02-13T19:38:24.337248844Z" level=info msg="RemovePodSandbox \"8884afd51137e8c2a858418a965a2e8e772bb7fc2abc4cc91d90462d5a312e24\" returns successfully" Feb 13 19:38:24.337906 containerd[1467]: time="2025-02-13T19:38:24.337810286Z" level=info msg="StopPodSandbox for \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\"" Feb 13 19:38:24.338109 containerd[1467]: time="2025-02-13T19:38:24.337966247Z" level=info msg="TearDown network for sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\" successfully" Feb 13 19:38:24.338109 containerd[1467]: time="2025-02-13T19:38:24.337989327Z" level=info msg="StopPodSandbox for \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\" returns successfully" Feb 13 19:38:24.338578 containerd[1467]: time="2025-02-13T19:38:24.338527250Z" level=info msg="RemovePodSandbox for \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\"" Feb 13 19:38:24.338578 containerd[1467]: time="2025-02-13T19:38:24.338555850Z" level=info msg="Forcibly stopping sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\"" Feb 13 19:38:24.338710 containerd[1467]: time="2025-02-13T19:38:24.338618010Z" level=info msg="TearDown network for sandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\" successfully" Feb 13 19:38:24.342916 containerd[1467]: time="2025-02-13T19:38:24.342831111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:38:24.343082 containerd[1467]: time="2025-02-13T19:38:24.342961752Z" level=info msg="RemovePodSandbox \"9fe5276af1dc188883415bd8b7fdbe15ecea84076e738c8a16b5190cf380ff37\" returns successfully" Feb 13 19:38:24.343616 containerd[1467]: time="2025-02-13T19:38:24.343592635Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:38:24.343721 containerd[1467]: time="2025-02-13T19:38:24.343703956Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:38:24.343721 containerd[1467]: time="2025-02-13T19:38:24.343719076Z" level=info msg="StopPodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:38:24.344234 containerd[1467]: time="2025-02-13T19:38:24.344210358Z" level=info msg="RemovePodSandbox for \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:38:24.344302 containerd[1467]: time="2025-02-13T19:38:24.344239918Z" level=info msg="Forcibly stopping sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\"" Feb 13 19:38:24.344330 containerd[1467]: time="2025-02-13T19:38:24.344308999Z" level=info msg="TearDown network for sandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" successfully" Feb 13 19:38:24.347541 containerd[1467]: time="2025-02-13T19:38:24.347486854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:38:24.347635 containerd[1467]: time="2025-02-13T19:38:24.347553055Z" level=info msg="RemovePodSandbox \"6b9442fe07f4b138989c9495c0ff3427d55d3a36fc6aac19f139285f6e35979b\" returns successfully" Feb 13 19:38:24.348039 containerd[1467]: time="2025-02-13T19:38:24.348002097Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:38:24.348114 containerd[1467]: time="2025-02-13T19:38:24.348089177Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:38:24.348114 containerd[1467]: time="2025-02-13T19:38:24.348102298Z" level=info msg="StopPodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:38:24.348703 containerd[1467]: time="2025-02-13T19:38:24.348620180Z" level=info msg="RemovePodSandbox for \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:38:24.348703 containerd[1467]: time="2025-02-13T19:38:24.348644660Z" level=info msg="Forcibly stopping sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\"" Feb 13 19:38:24.349089 containerd[1467]: time="2025-02-13T19:38:24.348708821Z" level=info msg="TearDown network for sandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" successfully" Feb 13 19:38:24.352350 containerd[1467]: time="2025-02-13T19:38:24.352260438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:38:24.352460 containerd[1467]: time="2025-02-13T19:38:24.352372759Z" level=info msg="RemovePodSandbox \"21360da9a039770a1ddd6f376c28514833dafcf9ebd77532702bd83ce5059ea4\" returns successfully" Feb 13 19:38:24.352902 containerd[1467]: time="2025-02-13T19:38:24.352874761Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:38:24.353020 containerd[1467]: time="2025-02-13T19:38:24.352972642Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:38:24.353020 containerd[1467]: time="2025-02-13T19:38:24.352983202Z" level=info msg="StopPodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:38:24.354798 containerd[1467]: time="2025-02-13T19:38:24.354772291Z" level=info msg="RemovePodSandbox for \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:38:24.354935 containerd[1467]: time="2025-02-13T19:38:24.354805131Z" level=info msg="Forcibly stopping sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\"" Feb 13 19:38:24.354935 containerd[1467]: time="2025-02-13T19:38:24.354897131Z" level=info msg="TearDown network for sandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" successfully" Feb 13 19:38:24.359610 containerd[1467]: time="2025-02-13T19:38:24.359549674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:38:24.359720 containerd[1467]: time="2025-02-13T19:38:24.359659955Z" level=info msg="RemovePodSandbox \"facde04d6688a60149e1a86985bdc15b51cf433c5b6701c0c55adac2d0f4e15d\" returns successfully" Feb 13 19:38:24.360238 containerd[1467]: time="2025-02-13T19:38:24.360066157Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:38:24.360238 containerd[1467]: time="2025-02-13T19:38:24.360162117Z" level=info msg="TearDown network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" successfully" Feb 13 19:38:24.360238 containerd[1467]: time="2025-02-13T19:38:24.360171558Z" level=info msg="StopPodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" returns successfully" Feb 13 19:38:24.360793 containerd[1467]: time="2025-02-13T19:38:24.360471199Z" level=info msg="RemovePodSandbox for \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:38:24.360793 containerd[1467]: time="2025-02-13T19:38:24.360501999Z" level=info msg="Forcibly stopping sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\"" Feb 13 19:38:24.360793 containerd[1467]: time="2025-02-13T19:38:24.360582600Z" level=info msg="TearDown network for sandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" successfully" Feb 13 19:38:24.364788 containerd[1467]: time="2025-02-13T19:38:24.364331098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:38:24.364788 containerd[1467]: time="2025-02-13T19:38:24.364416019Z" level=info msg="RemovePodSandbox \"56f2a546ec17712463a302b92e99ce95a823b36afa3e6835f70d61607dc5f9b2\" returns successfully" Feb 13 19:38:24.364974 containerd[1467]: time="2025-02-13T19:38:24.364952061Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" Feb 13 19:38:24.365098 containerd[1467]: time="2025-02-13T19:38:24.365053302Z" level=info msg="TearDown network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" successfully" Feb 13 19:38:24.365098 containerd[1467]: time="2025-02-13T19:38:24.365075662Z" level=info msg="StopPodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" returns successfully" Feb 13 19:38:24.365635 containerd[1467]: time="2025-02-13T19:38:24.365584064Z" level=info msg="RemovePodSandbox for \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" Feb 13 19:38:24.365635 containerd[1467]: time="2025-02-13T19:38:24.365621865Z" level=info msg="Forcibly stopping sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\"" Feb 13 19:38:24.365725 containerd[1467]: time="2025-02-13T19:38:24.365686785Z" level=info msg="TearDown network for sandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" successfully" Feb 13 19:38:24.369017 containerd[1467]: time="2025-02-13T19:38:24.368938841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:38:24.369017 containerd[1467]: time="2025-02-13T19:38:24.369013961Z" level=info msg="RemovePodSandbox \"b77d09c9c7029da414b452f92fffa1342c91f45990069b8349db31cef689a87f\" returns successfully" Feb 13 19:38:24.370128 containerd[1467]: time="2025-02-13T19:38:24.369726885Z" level=info msg="StopPodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\"" Feb 13 19:38:24.370128 containerd[1467]: time="2025-02-13T19:38:24.369889006Z" level=info msg="TearDown network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" successfully" Feb 13 19:38:24.370128 containerd[1467]: time="2025-02-13T19:38:24.369903966Z" level=info msg="StopPodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" returns successfully" Feb 13 19:38:24.371504 containerd[1467]: time="2025-02-13T19:38:24.370425208Z" level=info msg="RemovePodSandbox for \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\"" Feb 13 19:38:24.371504 containerd[1467]: time="2025-02-13T19:38:24.370453409Z" level=info msg="Forcibly stopping sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\"" Feb 13 19:38:24.371504 containerd[1467]: time="2025-02-13T19:38:24.370516769Z" level=info msg="TearDown network for sandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" successfully" Feb 13 19:38:24.376220 containerd[1467]: time="2025-02-13T19:38:24.376123277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:38:24.376220 containerd[1467]: time="2025-02-13T19:38:24.376214597Z" level=info msg="RemovePodSandbox \"3e7cea98176a243e49ae4170b1178c28e081dba1ddeddd0eadf157f4f907f188\" returns successfully" Feb 13 19:38:24.377587 containerd[1467]: time="2025-02-13T19:38:24.376988001Z" level=info msg="StopPodSandbox for \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\"" Feb 13 19:38:24.377587 containerd[1467]: time="2025-02-13T19:38:24.377116402Z" level=info msg="TearDown network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" successfully" Feb 13 19:38:24.377587 containerd[1467]: time="2025-02-13T19:38:24.377128762Z" level=info msg="StopPodSandbox for \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" returns successfully" Feb 13 19:38:24.378469 containerd[1467]: time="2025-02-13T19:38:24.378439168Z" level=info msg="RemovePodSandbox for \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\"" Feb 13 19:38:24.378891 containerd[1467]: time="2025-02-13T19:38:24.378630969Z" level=info msg="Forcibly stopping sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\"" Feb 13 19:38:24.378891 containerd[1467]: time="2025-02-13T19:38:24.378807290Z" level=info msg="TearDown network for sandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" successfully" Feb 13 19:38:24.384953 containerd[1467]: time="2025-02-13T19:38:24.384900040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:38:24.385270 containerd[1467]: time="2025-02-13T19:38:24.385163322Z" level=info msg="RemovePodSandbox \"f11dab4666f3d77d115519e2a54f9cacf7982cf64c08a90f333082bbd38552ce\" returns successfully" Feb 13 19:38:24.385865 containerd[1467]: time="2025-02-13T19:38:24.385737565Z" level=info msg="StopPodSandbox for \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\"" Feb 13 19:38:24.386023 containerd[1467]: time="2025-02-13T19:38:24.385930885Z" level=info msg="TearDown network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\" successfully" Feb 13 19:38:24.386023 containerd[1467]: time="2025-02-13T19:38:24.385951766Z" level=info msg="StopPodSandbox for \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\" returns successfully" Feb 13 19:38:24.387555 containerd[1467]: time="2025-02-13T19:38:24.386426848Z" level=info msg="RemovePodSandbox for \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\"" Feb 13 19:38:24.387555 containerd[1467]: time="2025-02-13T19:38:24.386459488Z" level=info msg="Forcibly stopping sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\"" Feb 13 19:38:24.387555 containerd[1467]: time="2025-02-13T19:38:24.386563089Z" level=info msg="TearDown network for sandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\" successfully" Feb 13 19:38:24.390261 containerd[1467]: time="2025-02-13T19:38:24.390216067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
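Each removal in this section is followed by the same warning: when containerd publishes the container event for the deleted sandbox it first tries to look the sandbox status up, finds that the metadata has already been removed ("not found"), and falls back to sending the event with a nil podSandboxStatus, which is why the RemovePodSandbox calls still return successfully. A hedged sketch of that lookup pattern is below; it reuses rt and ctx from the previous snippet, needs the extra imports noted in the comment, and the helper name statusOrNil is invented for illustration rather than taken from containerd.

    // Reuses rt, ctx and the imports from the previous sketch, plus:
    //   "google.golang.org/grpc/codes"
    //   "google.golang.org/grpc/status"

    // statusOrNil mirrors the fallback behind the warning: look the sandbox up,
    // and treat NotFound as "no status" rather than as a failure.
    func statusOrNil(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) *runtimeapi.PodSandboxStatus {
        resp, err := rt.PodSandboxStatus(ctx, &runtimeapi.PodSandboxStatusRequest{PodSandboxId: id})
        if status.Code(err) == codes.NotFound {
            return nil // metadata already gone: emit the event with a nil status
        }
        if err != nil {
            log.Printf("unexpected error looking up sandbox %s: %v", id, err)
            return nil
        }
        return resp.GetStatus()
    }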
Feb 13 19:38:24.390504 containerd[1467]: time="2025-02-13T19:38:24.390485348Z" level=info msg="RemovePodSandbox \"95ec2ed29be0e2f9e885236ba070ce0d75829545174ad26723753d610d7f491f\" returns successfully" Feb 13 19:38:25.333066 kubelet[1981]: E0213 19:38:25.332944 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:26.333293 kubelet[1981]: E0213 19:38:26.333155 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:27.334528 kubelet[1981]: E0213 19:38:27.334431 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:28.334706 kubelet[1981]: E0213 19:38:28.334628 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:29.335182 kubelet[1981]: E0213 19:38:29.335056 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:30.336240 kubelet[1981]: E0213 19:38:30.336140 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:31.336761 kubelet[1981]: E0213 19:38:31.336680 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:32.338004 kubelet[1981]: E0213 19:38:32.337880 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:33.338217 kubelet[1981]: E0213 19:38:33.338162 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:34.339471 kubelet[1981]: E0213 19:38:34.339369 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:35.340688 kubelet[1981]: E0213 19:38:35.340314 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:35.478511 kubelet[1981]: E0213 19:38:35.477971 1981 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:38:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:38:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:38:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:38:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.1\\\"],\\\"sizeBytes\\\":137671624},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.1\\\"],\\\"sizeBytes\\\":91072777},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":69692964},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\\\",\\\"registry.k8s.io/kube-proxy:v1.32.2\\\"],\\\"sizeBytes\\\":27362401},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\\\"],\\\"sizeBytes\\\":11252974},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.1\\\"],\\\"sizeBytes\\\":8834384},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\\\"],\\\"sizeBytes\\\":6487425},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"10.0.0.4\": Patch \"https://188.245.116.144:6443/api/v1/nodes/10.0.0.4/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:38:35.847056 kubelet[1981]: E0213 19:38:35.846718 1981 controller.go:195] "Failed to update lease" err="Put \"https://188.245.116.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:38:36.340713 kubelet[1981]: E0213 19:38:36.340634 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:38:37.340970 kubelet[1981]: E0213 19:38:37.340886 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"