Feb 13 19:51:54.897935 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:51:54.897960 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:51:54.897969 kernel: KASLR enabled
Feb 13 19:51:54.897975 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 19:51:54.897981 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Feb 13 19:51:54.897987 kernel: random: crng init done
Feb 13 19:51:54.897993 kernel: secureboot: Secure boot disabled
Feb 13 19:51:54.897999 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:51:54.898005 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 19:51:54.898013 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:51:54.898019 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898024 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898030 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898036 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898043 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898051 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898057 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898063 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898070 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:54.898076 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:51:54.898082 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 19:51:54.898088 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:51:54.898094 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 19:51:54.898101 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 19:51:54.898107 kernel: Zone ranges:
Feb 13 19:51:54.898114 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:51:54.898120 kernel: DMA32 empty
Feb 13 19:51:54.898127 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 19:51:54.898133 kernel: Movable zone start for each node
Feb 13 19:51:54.898139 kernel: Early memory node ranges
Feb 13 19:51:54.898145 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Feb 13 19:51:54.898151 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 19:51:54.898157 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 19:51:54.898163 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 19:51:54.898169 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 19:51:54.898175 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 19:51:54.898181 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 19:51:54.898188 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 19:51:54.898195 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 19:51:54.898201 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:51:54.898210 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:51:54.898217 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:51:54.898223 kernel: psci: Trusted OS migration not required
Feb 13 19:51:54.898231 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:51:54.898238 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:51:54.898245 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:51:54.898251 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:51:54.898258 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:51:54.898264 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:51:54.898271 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:51:54.898278 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:51:54.898284 kernel: CPU features: detected: Spectre-v4
Feb 13 19:51:54.898291 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:51:54.898299 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:51:54.898305 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:51:54.898312 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:51:54.898318 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:51:54.898325 kernel: alternatives: applying boot alternatives
Feb 13 19:51:54.898333 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:51:54.898340 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:51:54.898346 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:51:54.898353 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:51:54.898360 kernel: Fallback order for Node 0: 0
Feb 13 19:51:54.898366 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 19:51:54.898374 kernel: Policy zone: Normal
Feb 13 19:51:54.898381 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:51:54.898387 kernel: software IO TLB: area num 2.
Feb 13 19:51:54.898394 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 19:51:54.898401 kernel: Memory: 3882680K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 213320K reserved, 0K cma-reserved)
Feb 13 19:51:54.898408 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:51:54.898415 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:51:54.898422 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:51:54.898429 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:51:54.898436 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:51:54.898442 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:51:54.898449 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:51:54.898457 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:51:54.898464 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:51:54.898470 kernel: GICv3: 256 SPIs implemented
Feb 13 19:51:54.898477 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:51:54.898483 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:51:54.898490 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:51:54.898497 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:51:54.898503 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:51:54.898510 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:51:54.898517 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:51:54.898523 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 19:51:54.898531 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 19:51:54.898538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:51:54.898545 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:51:54.898552 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:51:54.898558 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:51:54.898565 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:51:54.898572 kernel: Console: colour dummy device 80x25
Feb 13 19:51:54.898579 kernel: ACPI: Core revision 20230628
Feb 13 19:51:54.898585 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:51:54.898592 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:51:54.898600 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:51:54.898607 kernel: landlock: Up and running.
Feb 13 19:51:54.898614 kernel: SELinux: Initializing.
Feb 13 19:51:54.898621 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:51:54.898627 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:51:54.898634 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:51:54.898642 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:51:54.898649 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:51:54.898656 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:51:54.898663 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:51:54.898685 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:51:54.898692 kernel: Remapping and enabling EFI services.
Feb 13 19:51:54.898699 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:51:54.898706 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:51:54.898713 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:51:54.898720 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 19:51:54.898727 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:51:54.898734 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:51:54.898775 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:51:54.898786 kernel: SMP: Total of 2 processors activated.
Feb 13 19:51:54.898794 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:51:54.898806 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:51:54.898815 kernel: CPU features: detected: Common not Private translations
Feb 13 19:51:54.898822 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:51:54.898829 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:51:54.898836 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:51:54.898844 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:51:54.898851 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:51:54.898860 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:51:54.898867 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:51:54.898875 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:51:54.898882 kernel: alternatives: applying system-wide alternatives
Feb 13 19:51:54.898889 kernel: devtmpfs: initialized
Feb 13 19:51:54.898897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:51:54.898904 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:51:54.898913 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:51:54.898920 kernel: SMBIOS 3.0.0 present.
Feb 13 19:51:54.898928 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 19:51:54.898935 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:51:54.898942 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:51:54.898950 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:51:54.898957 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:51:54.898965 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:51:54.898972 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Feb 13 19:51:54.898981 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:51:54.898988 kernel: cpuidle: using governor menu
Feb 13 19:51:54.898995 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:51:54.899003 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:51:54.899010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:51:54.899018 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:51:54.899025 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:51:54.899032 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:51:54.899039 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:51:54.899048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:51:54.899055 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:51:54.899062 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:51:54.899070 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:51:54.899077 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:51:54.899084 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:51:54.899091 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:51:54.899098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:51:54.899105 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:51:54.899112 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:51:54.899121 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:51:54.899128 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:51:54.899135 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:51:54.899143 kernel: ACPI: Interpreter enabled
Feb 13 19:51:54.899150 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:51:54.899157 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:51:54.899164 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:51:54.899171 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:51:54.899178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:51:54.899320 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:51:54.899391 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:51:54.899455 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:51:54.899516 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:51:54.899578 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:51:54.899588 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:51:54.899595 kernel: PCI host bridge to bus 0000:00
Feb 13 19:51:54.901478 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:51:54.901618 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:51:54.902399 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:51:54.902485 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:51:54.902571 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:51:54.902650 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 19:51:54.902780 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 19:51:54.902851 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 19:51:54.902926 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.902992 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 19:51:54.903064 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.903130 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 19:51:54.903206 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.903271 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 19:51:54.903345 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.903413 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 19:51:54.903486 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.903551 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 19:51:54.903642 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.903789 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 19:51:54.903873 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.903944 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 19:51:54.904016 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.904081 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 19:51:54.904152 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 19:51:54.904222 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 19:51:54.904293 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 19:51:54.904360 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 19:51:54.904436 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 19:51:54.904504 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 19:51:54.904571 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:51:54.904640 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 19:51:54.904818 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 19:51:54.904892 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 19:51:54.904966 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 19:51:54.905037 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 19:51:54.905115 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 19:51:54.905200 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 19:51:54.905281 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 19:51:54.905361 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 19:51:54.905427 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 19:51:54.905493 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 19:51:54.905566 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 19:51:54.905634 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 19:51:54.905792 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 19:51:54.905871 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 19:51:54.905938 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 19:51:54.906003 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 19:51:54.906069 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 19:51:54.906135 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 19:51:54.906203 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 19:51:54.906265 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 19:51:54.906330 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 19:51:54.906393 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 19:51:54.906455 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 19:51:54.906521 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 19:51:54.906584 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 19:51:54.906648 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 19:51:54.906803 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 19:51:54.906877 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 19:51:54.906940 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 19:51:54.907006 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 19:51:54.907068 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 19:51:54.907129 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 19:51:54.907195 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 19:51:54.907263 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 19:51:54.907326 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 19:51:54.907392 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 19:51:54.907456 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 19:51:54.907519 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 19:51:54.907585 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 19:51:54.907649 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 19:51:54.907784 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 19:51:54.907856 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 19:51:54.907921 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 19:51:54.907984 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 19:51:54.908050 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 19:51:54.908114 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 19:51:54.908179 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 19:51:54.908248 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 19:51:54.908317 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 19:51:54.908380 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 19:51:54.908456 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 19:51:54.908535 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 19:51:54.908610 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 19:51:54.909785 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 19:51:54.909908 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 19:51:54.909975 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 19:51:54.910041 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 19:51:54.910106 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 19:51:54.910172 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 19:51:54.910235 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 19:51:54.910302 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 19:51:54.910368 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 19:51:54.910435 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 19:51:54.910498 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 19:51:54.910564 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 19:51:54.910628 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 19:51:54.911849 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 19:51:54.911937 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 19:51:54.912005 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 19:51:54.912077 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 19:51:54.912143 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 19:51:54.912206 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 19:51:54.912270 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 19:51:54.912335 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 19:51:54.912400 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 19:51:54.912463 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 19:51:54.912531 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 19:51:54.912599 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 19:51:54.914695 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 19:51:54.914838 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 19:51:54.914910 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 19:51:54.914977 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 19:51:54.915048 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 19:51:54.915125 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 19:51:54.915192 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:51:54.915269 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 19:51:54.915336 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 19:51:54.915402 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 19:51:54.915465 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 19:51:54.915531 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 19:51:54.915604 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 19:51:54.915762 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 19:51:54.915839 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 19:51:54.915904 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 19:51:54.915967 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 19:51:54.916037 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 19:51:54.916105 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 19:51:54.916176 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 19:51:54.916239 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 19:51:54.916303 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 19:51:54.916367 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 19:51:54.916439 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 19:51:54.916505 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 19:51:54.916568 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 19:51:54.916631 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 19:51:54.916723 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 19:51:54.916851 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 19:51:54.916925 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 19:51:54.916994 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 19:51:54.917065 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 19:51:54.917130 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 19:51:54.917196 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 19:51:54.917267 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 19:51:54.917340 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 19:51:54.917406 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 19:51:54.917469 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 19:51:54.917533 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 19:51:54.917600 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 19:51:54.918181 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 19:51:54.918286 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 19:51:54.918361 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 19:51:54.918427 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 19:51:54.918490 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 19:51:54.918557 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 19:51:54.918620 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 19:51:54.918877 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 19:51:54.918955 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 19:51:54.919021 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 19:51:54.919090 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 19:51:54.919156 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 19:51:54.919218 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 19:51:54.919281 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 19:51:54.919344 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 19:51:54.919409 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:51:54.919466 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:51:54.919528 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:51:54.919605 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 19:51:54.919678 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 19:51:54.919758 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 19:51:54.919833 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 19:51:54.919893 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 19:51:54.919951 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 19:51:54.920021 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 19:51:54.920096 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 19:51:54.920180 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 19:51:54.920256 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 19:51:54.920315 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 19:51:54.920373 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 19:51:54.920438 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 19:51:54.920500 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 19:51:54.920559 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 19:51:54.920629 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 19:51:54.920802 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 19:51:54.920872 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 19:51:54.920940 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 19:51:54.920999 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 19:51:54.921058 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 19:51:54.921124 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 19:51:54.921184 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 19:51:54.921243 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 19:51:54.921310 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 19:51:54.921369 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 19:51:54.921427 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 19:51:54.921437 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:51:54.921445 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:51:54.921453 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:51:54.921461 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:51:54.921469 kernel: iommu: Default domain type: Translated
Feb 13 19:51:54.921478 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:51:54.921486 kernel: efivars: Registered efivars operations
Feb 13 19:51:54.921493 kernel: vgaarb: loaded
Feb 13 19:51:54.921501 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:51:54.921510 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:51:54.921518 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:51:54.921526 kernel: pnp: PnP ACPI init
Feb 13 19:51:54.921596 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:51:54.921608 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:51:54.921616 kernel: NET: Registered PF_INET protocol family
Feb 13 19:51:54.921624 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:51:54.921632 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:51:54.921639 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:51:54.921647 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:51:54.921655 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:51:54.921663 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:51:54.921684 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:51:54.921693 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:51:54.921701 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:51:54.921824 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 19:51:54.921837 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:51:54.921845 kernel: kvm [1]: HYP mode not available
Feb 13 19:51:54.921853 kernel: Initialise system trusted keyrings
Feb 13 19:51:54.921860 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:51:54.921868 kernel: Key type asymmetric registered
Feb 13 19:51:54.921875 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:51:54.921886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:51:54.921894 kernel: io scheduler mq-deadline registered
Feb 13 19:51:54.921902 kernel: io scheduler kyber registered
Feb 13 19:51:54.921910 kernel: io scheduler bfq registered
Feb 13 19:51:54.921918 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:51:54.921988 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 19:51:54.922054 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 19:51:54.922118 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 19:51:54.922186 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 19:51:54.922253 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 19:51:54.922317 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 19:51:54.922383 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 19:51:54.922448 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 19:51:54.922511 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:51:54.922579 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 19:51:54.922644 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 19:51:54.922750 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:51:54.922824 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 19:51:54.922890 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 19:51:54.922953 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:51:54.923023 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 19:51:54.923087 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 19:51:54.923152 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:51:54.923219 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 19:51:54.923285 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 19:51:54.923349 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:51:54.923419 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 19:51:54.923483 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 19:51:54.923547 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
19:51:54.923558 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 19:51:54.923622 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 19:51:54.924073 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 19:51:54.924160 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 19:51:54.924171 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:51:54.924179 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:51:54.924188 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:51:54.924268 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 19:51:54.924356 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 19:51:54.924370 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:51:54.924379 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:51:54.924450 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 19:51:54.924461 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 19:51:54.924469 kernel: thunder_xcv, ver 1.0 Feb 13 19:51:54.924476 kernel: thunder_bgx, ver 1.0 Feb 13 19:51:54.924483 kernel: nicpf, ver 1.0 Feb 13 19:51:54.924493 kernel: nicvf, ver 1.0 Feb 13 19:51:54.924580 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:51:54.924656 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:51:54 UTC (1739476314) Feb 13 19:51:54.925126 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:51:54.925139 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:51:54.925147 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:51:54.925155 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:51:54.925163 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:51:54.925171 kernel: Segment 
Routing with IPv6 Feb 13 19:51:54.925178 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:51:54.925186 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:51:54.925193 kernel: Key type dns_resolver registered Feb 13 19:51:54.925201 kernel: registered taskstats version 1 Feb 13 19:51:54.925211 kernel: Loading compiled-in X.509 certificates Feb 13 19:51:54.925218 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936' Feb 13 19:51:54.925226 kernel: Key type .fscrypt registered Feb 13 19:51:54.925234 kernel: Key type fscrypt-provisioning registered Feb 13 19:51:54.925241 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 19:51:54.925249 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:51:54.925257 kernel: ima: No architecture policies found Feb 13 19:51:54.925264 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:51:54.925274 kernel: clk: Disabling unused clocks Feb 13 19:51:54.925281 kernel: Freeing unused kernel memory: 39680K Feb 13 19:51:54.925289 kernel: Run /init as init process Feb 13 19:51:54.925297 kernel: with arguments: Feb 13 19:51:54.925305 kernel: /init Feb 13 19:51:54.925312 kernel: with environment: Feb 13 19:51:54.925319 kernel: HOME=/ Feb 13 19:51:54.925327 kernel: TERM=linux Feb 13 19:51:54.925334 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:51:54.925346 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:51:54.925356 systemd[1]: Detected virtualization kvm. Feb 13 19:51:54.925364 systemd[1]: Detected architecture arm64. Feb 13 19:51:54.925372 systemd[1]: Running in initrd. 
Feb 13 19:51:54.925380 systemd[1]: No hostname configured, using default hostname. Feb 13 19:51:54.925387 systemd[1]: Hostname set to . Feb 13 19:51:54.925396 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:51:54.925405 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:51:54.925414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:54.925422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:54.925431 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:51:54.925440 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:51:54.925448 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:51:54.925456 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:51:54.925467 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:51:54.925476 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:51:54.925486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:54.925494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:54.925502 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:51:54.925510 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:51:54.925519 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:51:54.925527 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:51:54.925536 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 13 19:51:54.925544 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:54.925552 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:51:54.925560 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:51:54.925568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:54.925576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:54.925584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:54.925592 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:51:54.925600 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:51:54.925610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:51:54.925618 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:51:54.925626 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:51:54.925634 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:51:54.925643 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:51:54.925651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:54.925660 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:54.925686 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:54.925697 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:51:54.925709 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:51:54.925717 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:51:54.925766 systemd-journald[237]: Collecting audit messages is disabled. 
Feb 13 19:51:54.925790 kernel: Bridge firewalling registered Feb 13 19:51:54.925798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:54.925806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:54.925815 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:54.925823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:51:54.925832 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:54.925842 systemd-journald[237]: Journal started Feb 13 19:51:54.925861 systemd-journald[237]: Runtime Journal (/run/log/journal/bc1391b481c64dd9b6edc8095458e6a3) is 8.0M, max 76.6M, 68.6M free. Feb 13 19:51:54.881013 systemd-modules-load[238]: Inserted module 'overlay' Feb 13 19:51:54.896155 systemd-modules-load[238]: Inserted module 'br_netfilter' Feb 13 19:51:54.931718 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:51:54.935698 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:51:54.940190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:54.952289 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:51:54.955914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:54.956927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:54.966173 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:51:54.967796 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:54.972395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:51:54.980652 dracut-cmdline[272]: dracut-dracut-053 Feb 13 19:51:54.983609 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a Feb 13 19:51:55.012170 systemd-resolved[277]: Positive Trust Anchors: Feb 13 19:51:55.012243 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:51:55.012273 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:51:55.022238 systemd-resolved[277]: Defaulting to hostname 'linux'. Feb 13 19:51:55.024215 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:51:55.024888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:55.059700 kernel: SCSI subsystem initialized Feb 13 19:51:55.063699 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:51:55.071706 kernel: iscsi: registered transport (tcp) Feb 13 19:51:55.084732 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:51:55.084839 kernel: QLogic iSCSI HBA Driver Feb 13 19:51:55.134487 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 19:51:55.142840 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:51:55.171898 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:51:55.171977 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:51:55.172700 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:51:55.224761 kernel: raid6: neonx8 gen() 15663 MB/s Feb 13 19:51:55.241734 kernel: raid6: neonx4 gen() 15537 MB/s Feb 13 19:51:55.258735 kernel: raid6: neonx2 gen() 13151 MB/s Feb 13 19:51:55.275761 kernel: raid6: neonx1 gen() 10428 MB/s Feb 13 19:51:55.292714 kernel: raid6: int64x8 gen() 6905 MB/s Feb 13 19:51:55.309774 kernel: raid6: int64x4 gen() 7305 MB/s Feb 13 19:51:55.326862 kernel: raid6: int64x2 gen() 6092 MB/s Feb 13 19:51:55.343729 kernel: raid6: int64x1 gen() 5037 MB/s Feb 13 19:51:55.343818 kernel: raid6: using algorithm neonx8 gen() 15663 MB/s Feb 13 19:51:55.360720 kernel: raid6: .... xor() 11855 MB/s, rmw enabled Feb 13 19:51:55.360783 kernel: raid6: using neon recovery algorithm Feb 13 19:51:55.365815 kernel: xor: measuring software checksum speed Feb 13 19:51:55.365874 kernel: 8regs : 19716 MB/sec Feb 13 19:51:55.365899 kernel: 32regs : 19683 MB/sec Feb 13 19:51:55.365919 kernel: arm64_neon : 27087 MB/sec Feb 13 19:51:55.366704 kernel: xor: using function: arm64_neon (27087 MB/sec) Feb 13 19:51:55.416718 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:51:55.432700 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:51:55.438858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:51:55.468205 systemd-udevd[456]: Using default interface naming scheme 'v255'. Feb 13 19:51:55.471711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:51:55.480914 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:51:55.496241 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Feb 13 19:51:55.531788 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:51:55.539177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:51:55.587393 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:55.594913 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:51:55.618072 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:55.619916 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:55.624462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:55.625123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:51:55.631929 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:51:55.653899 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:55.689710 kernel: scsi host0: Virtio SCSI HBA Feb 13 19:51:55.696686 kernel: ACPI: bus type USB registered Feb 13 19:51:55.696774 kernel: usbcore: registered new interface driver usbfs Feb 13 19:51:55.697934 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:51:55.698000 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 19:51:55.701692 kernel: usbcore: registered new interface driver hub Feb 13 19:51:55.701756 kernel: usbcore: registered new device driver usb Feb 13 19:51:55.722205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:55.722328 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:51:55.723166 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:55.725245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:55.725395 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:55.726051 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:55.737947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:55.753265 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 19:51:55.758172 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 19:51:55.758314 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:51:55.758328 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:51:55.754222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:55.762882 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 19:51:55.766111 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 19:51:55.788883 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 19:51:55.789012 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 19:51:55.789098 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 19:51:55.789204 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 19:51:55.789287 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 19:51:55.789367 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 19:51:55.789449 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:51:55.789530 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 19:51:55.789611 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 19:51:55.789716 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:51:55.789726 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 19:51:55.789828 kernel: GPT:17805311 != 80003071 Feb 13 19:51:55.789838 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:51:55.789847 kernel: GPT:17805311 != 80003071 Feb 13 19:51:55.789855 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:51:55.789864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:55.789873 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 19:51:55.789963 kernel: hub 1-0:1.0: USB hub found Feb 13 19:51:55.790061 kernel: hub 1-0:1.0: 4 ports detected Feb 13 19:51:55.790135 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 19:51:55.790228 kernel: hub 2-0:1.0: USB hub found Feb 13 19:51:55.790312 kernel: hub 2-0:1.0: 4 ports detected Feb 13 19:51:55.797681 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:51:55.828900 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (506) Feb 13 19:51:55.832694 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 19:51:55.839700 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (516) Feb 13 19:51:55.846421 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 19:51:55.856595 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 19:51:55.857359 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 19:51:55.866040 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 19:51:55.872939 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:51:55.880310 disk-uuid[574]: Primary Header is updated. Feb 13 19:51:55.880310 disk-uuid[574]: Secondary Entries is updated. Feb 13 19:51:55.880310 disk-uuid[574]: Secondary Header is updated. 
Feb 13 19:51:56.018782 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 19:51:56.260752 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 19:51:56.397040 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 19:51:56.397090 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 19:51:56.399692 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 19:51:56.453729 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 19:51:56.454059 kernel: usbcore: registered new interface driver usbhid Feb 13 19:51:56.455662 kernel: usbhid: USB HID core driver Feb 13 19:51:56.896968 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:56.897036 disk-uuid[575]: The operation has completed successfully. Feb 13 19:51:56.950716 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:51:56.951949 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:51:56.971583 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:51:56.976782 sh[583]: Success Feb 13 19:51:56.989703 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:51:57.040546 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:51:57.048012 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:51:57.053300 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:51:57.077755 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44 Feb 13 19:51:57.077839 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:51:57.077879 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:51:57.078825 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:51:57.079792 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:51:57.088711 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:51:57.090581 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:51:57.091287 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:51:57.096853 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:51:57.099721 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:51:57.111928 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:51:57.111990 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:51:57.112001 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:57.117775 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:51:57.117839 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:57.131706 kernel: BTRFS info (device sda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:51:57.131885 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:51:57.138653 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:51:57.142848 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:51:57.249221 ignition[666]: Ignition 2.20.0 Feb 13 19:51:57.249922 ignition[666]: Stage: fetch-offline Feb 13 19:51:57.250324 ignition[666]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:57.250334 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 19:51:57.250509 ignition[666]: parsed url from cmdline: "" Feb 13 19:51:57.251854 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:57.250513 ignition[666]: no config URL provided Feb 13 19:51:57.250517 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:51:57.250525 ignition[666]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:51:57.250531 ignition[666]: failed to fetch config: resource requires networking Feb 13 19:51:57.250746 ignition[666]: Ignition finished successfully Feb 13 19:51:57.263622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:51:57.264558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:57.284258 systemd-networkd[770]: lo: Link UP Feb 13 19:51:57.284273 systemd-networkd[770]: lo: Gained carrier Feb 13 19:51:57.285878 systemd-networkd[770]: Enumeration completed Feb 13 19:51:57.286089 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:51:57.286831 systemd[1]: Reached target network.target - Network. Feb 13 19:51:57.288207 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:57.288210 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:51:57.289037 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:51:57.289040 systemd-networkd[770]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:51:57.289884 systemd-networkd[770]: eth0: Link UP
Feb 13 19:51:57.289887 systemd-networkd[770]: eth0: Gained carrier
Feb 13 19:51:57.289895 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:57.293990 systemd-networkd[770]: eth1: Link UP
Feb 13 19:51:57.293993 systemd-networkd[770]: eth1: Gained carrier
Feb 13 19:51:57.294001 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:57.297877 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:51:57.311632 ignition[773]: Ignition 2.20.0
Feb 13 19:51:57.311644 ignition[773]: Stage: fetch
Feb 13 19:51:57.311907 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:57.311918 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:51:57.312036 ignition[773]: parsed url from cmdline: ""
Feb 13 19:51:57.312040 ignition[773]: no config URL provided
Feb 13 19:51:57.312045 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:51:57.312053 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:51:57.312138 ignition[773]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Feb 13 19:51:57.313075 ignition[773]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 19:51:57.319770 systemd-networkd[770]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:51:57.357807 systemd-networkd[770]: eth0: DHCPv4 address 188.245.239.161/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 19:51:57.514018 ignition[773]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Feb 13 19:51:57.519330 ignition[773]: GET result: OK
Feb 13 19:51:57.519397 ignition[773]: parsing config with SHA512: 81046c9ceba6ea625d572f6e44cb05b25907407bc5b68a324831fadb21644ce39d0d3e674a370a184e7ae0989e41e979ac476fed2fd16ecad5f57645452b4614
Feb 13 19:51:57.525426 unknown[773]: fetched base config from "system"
Feb 13 19:51:57.526054 unknown[773]: fetched base config from "system"
Feb 13 19:51:57.526071 unknown[773]: fetched user config from "hetzner"
Feb 13 19:51:57.526660 ignition[773]: fetch: fetch complete
Feb 13 19:51:57.528840 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:51:57.526712 ignition[773]: fetch: fetch passed
Feb 13 19:51:57.526839 ignition[773]: Ignition finished successfully
Feb 13 19:51:57.537003 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:51:57.551065 ignition[781]: Ignition 2.20.0
Feb 13 19:51:57.551718 ignition[781]: Stage: kargs
Feb 13 19:51:57.552335 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:57.552350 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:51:57.553099 ignition[781]: kargs: kargs passed
Feb 13 19:51:57.553149 ignition[781]: Ignition finished successfully
Feb 13 19:51:57.556385 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:51:57.561864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:51:57.574992 ignition[788]: Ignition 2.20.0
Feb 13 19:51:57.575002 ignition[788]: Stage: disks
Feb 13 19:51:57.575183 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:57.577398 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:51:57.575194 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:51:57.579551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:51:57.575945 ignition[788]: disks: disks passed
Feb 13 19:51:57.580280 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:51:57.575995 ignition[788]: Ignition finished successfully
Feb 13 19:51:57.582017 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:51:57.583014 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:51:57.584044 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:51:57.589895 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:51:57.605461 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 19:51:57.610153 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:51:57.615008 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:51:57.666751 kernel: EXT4-fs (sda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:51:57.667892 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:51:57.668863 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:51:57.679865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:51:57.683873 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:51:57.686952 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 19:51:57.689869 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:51:57.689905 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:51:57.697696 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (804)
Feb 13 19:51:57.700260 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:51:57.700306 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:51:57.700318 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:51:57.703237 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:51:57.705412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:51:57.713589 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:51:57.713641 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:51:57.715344 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:51:57.772228 coreos-metadata[806]: Feb 13 19:51:57.772 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Feb 13 19:51:57.775346 coreos-metadata[806]: Feb 13 19:51:57.775 INFO Fetch successful
Feb 13 19:51:57.778022 coreos-metadata[806]: Feb 13 19:51:57.776 INFO wrote hostname ci-4152-2-1-8-affa313201 to /sysroot/etc/hostname
Feb 13 19:51:57.781158 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:51:57.781918 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:51:57.790783 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:51:57.796257 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:51:57.800596 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:51:57.898426 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:51:57.903834 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:51:57.906621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:51:57.917699 kernel: BTRFS info (device sda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:51:57.934555 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:51:57.942100 ignition[922]: INFO : Ignition 2.20.0
Feb 13 19:51:57.942100 ignition[922]: INFO : Stage: mount
Feb 13 19:51:57.943219 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:57.943219 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:51:57.943219 ignition[922]: INFO : mount: mount passed
Feb 13 19:51:57.943219 ignition[922]: INFO : Ignition finished successfully
Feb 13 19:51:57.944707 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:51:57.952907 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:51:58.077245 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:51:58.084241 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:51:58.097705 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (934)
Feb 13 19:51:58.099088 kernel: BTRFS info (device sda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:51:58.099131 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:51:58.099155 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:51:58.102697 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:51:58.102758 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:51:58.105747 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:51:58.136083 ignition[951]: INFO : Ignition 2.20.0
Feb 13 19:51:58.137484 ignition[951]: INFO : Stage: files
Feb 13 19:51:58.139222 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:58.139222 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:51:58.139222 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:51:58.144086 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:51:58.144086 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:51:58.148107 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:51:58.149265 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:51:58.150884 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:51:58.150884 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:51:58.149319 unknown[951]: wrote ssh authorized keys file for user: core
Feb 13 19:51:58.158540 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:51:58.158540 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:51:58.158540 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:51:58.158540 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:51:58.158540 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:51:58.158540 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:51:58.158540 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:51:58.753470 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:51:59.168925 systemd-networkd[770]: eth0: Gained IPv6LL
Feb 13 19:51:59.169894 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:51:59.169894 ignition[951]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:51:59.172274 ignition[951]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 19:51:59.172274 ignition[951]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 19:51:59.172274 ignition[951]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:51:59.172274 ignition[951]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:51:59.172274 ignition[951]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:51:59.172274 ignition[951]: INFO : files: files passed
Feb 13 19:51:59.172274 ignition[951]: INFO : Ignition finished successfully
Feb 13 19:51:59.173264 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:51:59.179925 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:51:59.183033 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:51:59.187072 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:51:59.194218 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:51:59.205838 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:51:59.207506 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:51:59.209070 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:51:59.209930 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:51:59.211653 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:51:59.216969 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:51:59.270104 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:51:59.271153 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:51:59.272443 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:51:59.273520 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:51:59.274633 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:51:59.279932 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:51:59.295476 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:51:59.300899 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:51:59.313996 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:51:59.314736 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:51:59.316757 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:51:59.317799 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:51:59.317926 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:51:59.319397 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:51:59.320083 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:51:59.321209 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:51:59.322311 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:51:59.323311 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:51:59.324392 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:51:59.325468 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:51:59.326621 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:51:59.327646 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:51:59.328785 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:51:59.329621 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:51:59.329769 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:51:59.331032 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:51:59.331647 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:51:59.332663 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:51:59.333173 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:51:59.333947 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:51:59.334069 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:51:59.335563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:51:59.335693 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:51:59.336932 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:51:59.337034 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:51:59.338081 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 19:51:59.338173 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 19:51:59.347957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:51:59.353314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:51:59.354880 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:51:59.355045 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:51:59.358708 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:51:59.358960 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:51:59.361210 systemd-networkd[770]: eth1: Gained IPv6LL
Feb 13 19:51:59.366066 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:51:59.366787 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:51:59.369016 ignition[1005]: INFO : Ignition 2.20.0
Feb 13 19:51:59.369016 ignition[1005]: INFO : Stage: umount
Feb 13 19:51:59.370959 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:59.370959 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 19:51:59.377881 ignition[1005]: INFO : umount: umount passed
Feb 13 19:51:59.377881 ignition[1005]: INFO : Ignition finished successfully
Feb 13 19:51:59.375574 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:51:59.376140 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:51:59.376691 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:51:59.380826 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:51:59.380979 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:51:59.384550 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:51:59.384637 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:51:59.386144 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:51:59.386190 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:51:59.390188 systemd[1]: Stopped target network.target - Network.
Feb 13 19:51:59.391620 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:51:59.391733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:51:59.395351 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:51:59.397291 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:51:59.398264 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:51:59.400484 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:51:59.402923 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:51:59.404596 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:51:59.404680 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:51:59.405815 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:51:59.405863 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:51:59.407804 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:51:59.407896 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:51:59.409353 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:51:59.409409 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:51:59.410212 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:51:59.410859 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:51:59.414806 systemd-networkd[770]: eth0: DHCPv6 lease lost
Feb 13 19:51:59.416229 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:51:59.416377 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:51:59.417361 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:51:59.417414 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:51:59.419774 systemd-networkd[770]: eth1: DHCPv6 lease lost
Feb 13 19:51:59.423651 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:51:59.424204 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:51:59.425404 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:51:59.425491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:51:59.427942 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:51:59.427989 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:51:59.433925 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:51:59.435330 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:51:59.435394 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:51:59.436685 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:51:59.436785 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:51:59.437336 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:51:59.437374 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:51:59.438128 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:51:59.438173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:51:59.439293 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:51:59.453403 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:51:59.453538 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:51:59.462891 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:51:59.463138 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:51:59.466080 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:51:59.466135 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:51:59.467407 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:51:59.467448 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:51:59.468869 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:51:59.468931 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:51:59.470811 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:51:59.470859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:51:59.472205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:51:59.472254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:59.477961 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:51:59.478548 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:51:59.478608 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:51:59.481257 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:51:59.481307 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:51:59.482350 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:51:59.482395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:51:59.483998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:51:59.484050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:59.491368 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:51:59.491471 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:51:59.492863 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:51:59.497944 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:51:59.509930 systemd[1]: Switching root.
Feb 13 19:51:59.551033 systemd-journald[237]: Journal stopped
Feb 13 19:52:00.455358 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:52:00.455437 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:52:00.455454 kernel: SELinux: policy capability open_perms=1
Feb 13 19:52:00.455466 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:52:00.455476 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:52:00.455485 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:52:00.455495 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:52:00.455504 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:52:00.455514 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:52:00.455523 kernel: audit: type=1403 audit(1739476319.669:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:52:00.455535 systemd[1]: Successfully loaded SELinux policy in 35.846ms.
Feb 13 19:52:00.455593 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.789ms.
Feb 13 19:52:00.455607 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:52:00.455618 systemd[1]: Detected virtualization kvm.
Feb 13 19:52:00.455632 systemd[1]: Detected architecture arm64.
Feb 13 19:52:00.455642 systemd[1]: Detected first boot.
Feb 13 19:52:00.455652 systemd[1]: Hostname set to .
Feb 13 19:52:00.455662 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:52:00.455691 zram_generator::config[1049]: No configuration found.
Feb 13 19:52:00.455704 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:52:00.455728 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:52:00.455739 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:52:00.455749 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:52:00.455760 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:52:00.455772 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:52:00.455783 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:52:00.455793 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:52:00.455805 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:52:00.455815 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:52:00.455825 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:52:00.455835 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:52:00.455845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:52:00.455856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:52:00.455865 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:52:00.455875 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:52:00.455890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:52:00.455901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:52:00.455910 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 19:52:00.455920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:52:00.455930 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:52:00.455940 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:52:00.455952 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:52:00.455962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:52:00.455972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:52:00.455985 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:52:00.455997 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:52:00.456007 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:52:00.456017 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:52:00.456028 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:52:00.456038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:52:00.456049 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:52:00.456060 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:52:00.456070 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:52:00.456080 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:52:00.456090 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:52:00.456100 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:52:00.456113 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:52:00.456123 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:52:00.456133 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:52:00.456143 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:52:00.456155 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:52:00.456165 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:52:00.456175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:00.456186 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:52:00.456196 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:52:00.456206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:00.456218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:52:00.456228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:00.456239 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:52:00.456249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:00.456260 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:52:00.456270 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:52:00.456280 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:52:00.456290 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:52:00.456299 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:52:00.456309 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:52:00.456320 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:52:00.456333 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:52:00.456345 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:52:00.456357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:52:00.456368 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:52:00.456378 systemd[1]: Stopped verity-setup.service.
Feb 13 19:52:00.456390 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:52:00.456400 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:52:00.456410 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:52:00.456420 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:52:00.456430 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:52:00.456443 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:52:00.456453 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:52:00.456463 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:52:00.456473 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:52:00.456483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:00.456493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:00.456504 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:52:00.456514 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:52:00.456526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:52:00.456536 kernel: fuse: init (API version 7.39)
Feb 13 19:52:00.456547 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:52:00.456557 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:52:00.456567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:52:00.456578 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:52:00.456589 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:52:00.456599 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:52:00.456609 kernel: ACPI: bus type drm_connector registered
Feb 13 19:52:00.456619 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:52:00.456630 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:52:00.456640 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:52:00.456650 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:52:00.456662 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:52:00.456682 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:52:00.456695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:00.456706 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:52:00.456752 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:52:00.456767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:52:00.456825 systemd-journald[1116]: Collecting audit messages is disabled.
Feb 13 19:52:00.456855 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:52:00.456867 systemd-journald[1116]: Journal started
Feb 13 19:52:00.456893 systemd-journald[1116]: Runtime Journal (/run/log/journal/bc1391b481c64dd9b6edc8095458e6a3) is 8.0M, max 76.6M, 68.6M free.
Feb 13 19:52:00.148357 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:52:00.173830 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 19:52:00.174219 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:52:00.460726 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:52:00.468231 kernel: loop: module loaded
Feb 13 19:52:00.461934 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:52:00.463279 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:52:00.464923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:52:00.465936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:00.466242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:00.471909 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:52:00.473231 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:00.473749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:00.476923 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:52:00.510997 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:52:00.511805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:52:00.511880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:52:00.514013 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:52:00.515644 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:52:00.523322 systemd-tmpfiles[1132]: ACLs are not supported, ignoring.
Feb 13 19:52:00.523338 systemd-tmpfiles[1132]: ACLs are not supported, ignoring.
Feb 13 19:52:00.526878 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:52:00.540784 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:52:00.549924 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:52:00.552753 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 19:52:00.560850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:52:00.565761 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:52:00.571740 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:52:00.580458 systemd-journald[1116]: Time spent on flushing to /var/log/journal/bc1391b481c64dd9b6edc8095458e6a3 is 22.635ms for 1122 entries.
Feb 13 19:52:00.580458 systemd-journald[1116]: System Journal (/var/log/journal/bc1391b481c64dd9b6edc8095458e6a3) is 8.0M, max 584.8M, 576.8M free.
Feb 13 19:52:00.613709 systemd-journald[1116]: Received client request to flush runtime journal.
Feb 13 19:52:00.613791 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:52:00.615098 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:52:00.619745 kernel: loop1: detected capacity change from 0 to 113536
Feb 13 19:52:00.637090 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:52:00.648115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:52:00.664907 kernel: loop2: detected capacity change from 0 to 116808
Feb 13 19:52:00.676224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:52:00.689997 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:52:00.696251 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Feb 13 19:52:00.696270 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Feb 13 19:52:00.703696 kernel: loop3: detected capacity change from 0 to 8
Feb 13 19:52:00.710198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:52:00.718871 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:52:00.728690 kernel: loop4: detected capacity change from 0 to 194096
Feb 13 19:52:00.755002 kernel: loop5: detected capacity change from 0 to 113536
Feb 13 19:52:00.773822 kernel: loop6: detected capacity change from 0 to 116808
Feb 13 19:52:00.796779 kernel: loop7: detected capacity change from 0 to 8
Feb 13 19:52:00.798501 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Feb 13 19:52:00.799298 (sd-merge)[1191]: Merged extensions into '/usr'.
Feb 13 19:52:00.806159 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:52:00.806465 systemd[1]: Reloading...
Feb 13 19:52:00.890693 zram_generator::config[1213]: No configuration found.
Feb 13 19:52:00.961541 ldconfig[1142]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:52:01.038420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:52:01.084289 systemd[1]: Reloading finished in 277 ms.
Feb 13 19:52:01.114546 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:52:01.117967 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:52:01.129051 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:52:01.136004 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:52:01.141917 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:52:01.141931 systemd[1]: Reloading...
Feb 13 19:52:01.188917 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:52:01.189201 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:52:01.189908 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:52:01.190140 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 19:52:01.190197 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 19:52:01.195460 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:52:01.195479 systemd-tmpfiles[1255]: Skipping /boot
Feb 13 19:52:01.211532 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:52:01.211553 systemd-tmpfiles[1255]: Skipping /boot
Feb 13 19:52:01.236703 zram_generator::config[1281]: No configuration found.
Feb 13 19:52:01.349140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:52:01.395095 systemd[1]: Reloading finished in 252 ms.
Feb 13 19:52:01.419422 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:52:01.427650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:52:01.434478 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:52:01.438891 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:52:01.444267 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:52:01.455897 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:52:01.463028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:52:01.473894 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:52:01.482914 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:52:01.485380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:01.501164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:01.505109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:01.517997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:01.518641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:01.520174 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:52:01.526837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:01.526974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:01.530592 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Feb 13 19:52:01.542187 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:52:01.544551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:01.544910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:01.546936 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:01.547548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:01.557729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:01.558054 augenrules[1352]: No rules
Feb 13 19:52:01.566013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:01.571514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:01.576954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:01.577687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:01.578435 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:52:01.579408 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:52:01.581402 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:52:01.583541 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:52:01.585331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:01.586027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:01.587196 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:52:01.606444 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:52:01.608860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:01.610567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:01.614864 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:52:01.615502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:01.618610 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:52:01.619487 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:52:01.620866 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:52:01.621608 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:52:01.622803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:01.623919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:01.624829 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:01.624941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:01.632974 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:52:01.640119 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:52:01.641925 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:52:01.666855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:01.668280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:01.670613 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:52:01.673293 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:52:01.673977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:52:01.678605 augenrules[1372]: /sbin/augenrules: No change
Feb 13 19:52:01.715763 augenrules[1416]: No rules
Feb 13 19:52:01.718041 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:52:01.718218 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:52:01.743629 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 19:52:01.772527 systemd-networkd[1380]: lo: Link UP
Feb 13 19:52:01.774707 systemd-networkd[1380]: lo: Gained carrier
Feb 13 19:52:01.775500 systemd-networkd[1380]: Enumeration completed
Feb 13 19:52:01.775614 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:52:01.785865 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:52:01.789702 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:52:01.792236 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:52:01.804132 systemd-resolved[1324]: Positive Trust Anchors:
Feb 13 19:52:01.804216 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:52:01.804249 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:52:01.811477 systemd-resolved[1324]: Using system hostname 'ci-4152-2-1-8-affa313201'.
Feb 13 19:52:01.814850 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:52:01.816183 systemd[1]: Reached target network.target - Network.
Feb 13 19:52:01.816843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:52:01.832968 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:01.833171 systemd-networkd[1380]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:52:01.834683 systemd-networkd[1380]: eth1: Link UP
Feb 13 19:52:01.834691 systemd-networkd[1380]: eth1: Gained carrier
Feb 13 19:52:01.834726 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:01.864873 systemd-networkd[1380]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:52:01.866550 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:01.866556 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:52:01.867453 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Feb 13 19:52:01.869126 systemd-networkd[1380]: eth0: Link UP
Feb 13 19:52:01.869136 systemd-networkd[1380]: eth0: Gained carrier
Feb 13 19:52:01.869156 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:01.901747 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:52:01.901982 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1422)
Feb 13 19:52:01.931251 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Feb 13 19:52:01.931544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:01.933725 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Feb 13 19:52:01.933782 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 13 19:52:01.933795 kernel: [drm] features: -context_init
Feb 13 19:52:01.957022 systemd-networkd[1380]: eth0: DHCPv4 address 188.245.239.161/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 19:52:01.957657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:01.961818 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Feb 13 19:52:01.962411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:01.967176 kernel: [drm] number of scanouts: 1
Feb 13 19:52:01.967248 kernel: [drm] number of cap sets: 0
Feb 13 19:52:01.966523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:01.967964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:01.968097 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:52:01.968504 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:01.969012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:01.970360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:01.970977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:01.975693 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Feb 13 19:52:01.982699 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:52:01.982419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Feb 13 19:52:01.993697 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 13 19:52:01.996882 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:01.997291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:02.022192 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:52:02.022902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:52:02.022955 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:52:02.031679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:52:02.037794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:52:02.038620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:52:02.040764 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:52:02.047118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:52:02.114346 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:52:02.182361 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:52:02.192000 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:52:02.218552 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:52:02.241008 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:52:02.243518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:52:02.244382 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:52:02.245210 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:52:02.245911 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:52:02.246830 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:52:02.247492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:52:02.248226 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:52:02.248900 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:52:02.248934 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:52:02.249392 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:52:02.251173 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:52:02.253276 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:52:02.258328 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:52:02.260382 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:52:02.261768 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:52:02.262534 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:52:02.263179 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:52:02.263843 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:52:02.263877 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:52:02.269925 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:52:02.273914 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:52:02.277644 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:52:02.282895 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:52:02.287877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:52:02.290850 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:52:02.291803 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:52:02.299908 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:52:02.303897 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Feb 13 19:52:02.310405 jq[1475]: false
Feb 13 19:52:02.308894 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:52:02.320937 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:52:02.325977 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:52:02.329094 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:52:02.332911 dbus-daemon[1474]: [system] SELinux support is enabled
Feb 13 19:52:02.329619 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:52:02.333379 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:52:02.337848 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:52:02.340078 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:52:02.350467 jq[1487]: true
Feb 13 19:52:02.346927 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:52:02.356886 coreos-metadata[1473]: Feb 13 19:52:02.352 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Feb 13 19:52:02.352282 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:52:02.352465 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:52:02.371612 coreos-metadata[1473]: Feb 13 19:52:02.369 INFO Fetch successful
Feb 13 19:52:02.371612 coreos-metadata[1473]: Feb 13 19:52:02.369 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Feb 13 19:52:02.371612 coreos-metadata[1473]: Feb 13 19:52:02.369 INFO Fetch successful
Feb 13 19:52:02.366860 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:52:02.366902 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:52:02.367620 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:52:02.367641 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:52:02.386102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:52:02.386836 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:52:02.395574 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:52:02.396871 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:52:02.417196 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:52:02.419292 jq[1491]: true
Feb 13 19:52:02.422626 extend-filesystems[1476]: Found loop4
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found loop5
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found loop6
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found loop7
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda1
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda2
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda3
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found usr
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda4
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda6
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda7
Feb 13 19:52:02.434696 extend-filesystems[1476]: Found sda9
Feb 13 19:52:02.434696 extend-filesystems[1476]: Checking size of /dev/sda9
Feb 13 19:52:02.464513 update_engine[1486]: I20250213 19:52:02.463821 1486 main.cc:92] Flatcar Update Engine starting
Feb 13 19:52:02.480248 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:52:02.484217 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:52:02.485604 update_engine[1486]: I20250213 19:52:02.485044 1486 update_check_scheduler.cc:74] Next update check in 7m1s
Feb 13 19:52:02.492609 extend-filesystems[1476]: Resized partition /dev/sda9
Feb 13 19:52:02.493469 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:52:02.494362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:52:02.497511 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:52:02.502720 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 19:52:02.506450 systemd-logind[1483]: New seat seat0. Feb 13 19:52:02.511144 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:52:02.511171 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 19:52:02.511704 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:52:02.586096 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:52:02.589030 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:52:02.599023 systemd[1]: Starting sshkeys.service... Feb 13 19:52:02.622972 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1402) Feb 13 19:52:02.648912 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:52:02.657133 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:52:02.663695 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 19:52:02.683120 extend-filesystems[1534]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 19:52:02.683120 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 19:52:02.683120 extend-filesystems[1534]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 19:52:02.691877 extend-filesystems[1476]: Resized filesystem in /dev/sda9 Feb 13 19:52:02.691877 extend-filesystems[1476]: Found sr0 Feb 13 19:52:02.687941 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:52:02.688833 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Feb 13 19:52:02.702107 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:52:02.707903 coreos-metadata[1553]: Feb 13 19:52:02.707 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 19:52:02.711144 coreos-metadata[1553]: Feb 13 19:52:02.709 INFO Fetch successful Feb 13 19:52:02.714553 unknown[1553]: wrote ssh authorized keys file for user: core Feb 13 19:52:02.743747 update-ssh-keys[1557]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:52:02.743161 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:52:02.747333 systemd[1]: Finished sshkeys.service. Feb 13 19:52:02.770399 containerd[1506]: time="2025-02-13T19:52:02.770287320Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:52:02.818157 containerd[1506]: time="2025-02-13T19:52:02.818090360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:02.819822 containerd[1506]: time="2025-02-13T19:52:02.819774000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:02.819822 containerd[1506]: time="2025-02-13T19:52:02.819820680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:52:02.819877 containerd[1506]: time="2025-02-13T19:52:02.819842160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:52:02.820045 containerd[1506]: time="2025-02-13T19:52:02.820023560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:52:02.820069 containerd[1506]: time="2025-02-13T19:52:02.820050320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820141 containerd[1506]: time="2025-02-13T19:52:02.820115760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820141 containerd[1506]: time="2025-02-13T19:52:02.820128000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820329 containerd[1506]: time="2025-02-13T19:52:02.820305400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820353 containerd[1506]: time="2025-02-13T19:52:02.820327520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820353 containerd[1506]: time="2025-02-13T19:52:02.820342560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820353 containerd[1506]: time="2025-02-13T19:52:02.820351320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820449 containerd[1506]: time="2025-02-13T19:52:02.820429880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820685 containerd[1506]: time="2025-02-13T19:52:02.820646840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820830 containerd[1506]: time="2025-02-13T19:52:02.820807000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:02.820876 containerd[1506]: time="2025-02-13T19:52:02.820829480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:52:02.820957 containerd[1506]: time="2025-02-13T19:52:02.820937040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:52:02.821010 containerd[1506]: time="2025-02-13T19:52:02.820992440Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:52:02.827051 containerd[1506]: time="2025-02-13T19:52:02.827005960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:52:02.827145 containerd[1506]: time="2025-02-13T19:52:02.827092320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:52:02.827145 containerd[1506]: time="2025-02-13T19:52:02.827117000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:52:02.827197 containerd[1506]: time="2025-02-13T19:52:02.827135240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:52:02.827220 containerd[1506]: time="2025-02-13T19:52:02.827202160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:52:02.827405 containerd[1506]: time="2025-02-13T19:52:02.827384320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 19:52:02.828096 containerd[1506]: time="2025-02-13T19:52:02.828066920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:52:02.828264 containerd[1506]: time="2025-02-13T19:52:02.828239960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:52:02.828288 containerd[1506]: time="2025-02-13T19:52:02.828269680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:52:02.828316 containerd[1506]: time="2025-02-13T19:52:02.828300920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:52:02.828343 containerd[1506]: time="2025-02-13T19:52:02.828324000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:52:02.828362 containerd[1506]: time="2025-02-13T19:52:02.828341840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:52:02.828380 containerd[1506]: time="2025-02-13T19:52:02.828359120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:52:02.828403 containerd[1506]: time="2025-02-13T19:52:02.828377920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:52:02.828403 containerd[1506]: time="2025-02-13T19:52:02.828397440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:52:02.828438 containerd[1506]: time="2025-02-13T19:52:02.828414160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 13 19:52:02.828438 containerd[1506]: time="2025-02-13T19:52:02.828430360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:52:02.828471 containerd[1506]: time="2025-02-13T19:52:02.828447280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:52:02.828489 containerd[1506]: time="2025-02-13T19:52:02.828472720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828510 containerd[1506]: time="2025-02-13T19:52:02.828491880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828529 containerd[1506]: time="2025-02-13T19:52:02.828508840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828550 containerd[1506]: time="2025-02-13T19:52:02.828525440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828569 containerd[1506]: time="2025-02-13T19:52:02.828548240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828590 containerd[1506]: time="2025-02-13T19:52:02.828566880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828590 containerd[1506]: time="2025-02-13T19:52:02.828581760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828624 containerd[1506]: time="2025-02-13T19:52:02.828596960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828642 containerd[1506]: time="2025-02-13T19:52:02.828619840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:52:02.828675 containerd[1506]: time="2025-02-13T19:52:02.828639560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828675 containerd[1506]: time="2025-02-13T19:52:02.828655560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828722 containerd[1506]: time="2025-02-13T19:52:02.828688960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828752 containerd[1506]: time="2025-02-13T19:52:02.828719520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828752 containerd[1506]: time="2025-02-13T19:52:02.828744440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:52:02.828786 containerd[1506]: time="2025-02-13T19:52:02.828772640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828804 containerd[1506]: time="2025-02-13T19:52:02.828790440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.828822 containerd[1506]: time="2025-02-13T19:52:02.828807200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829014800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829044000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829125920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829146240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829159120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829177000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829190400Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:52:02.830671 containerd[1506]: time="2025-02-13T19:52:02.829203560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:52:02.830838 containerd[1506]: time="2025-02-13T19:52:02.829618480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:52:02.830838 containerd[1506]: time="2025-02-13T19:52:02.829737000Z" level=info msg="Connect containerd service" Feb 13 19:52:02.830838 containerd[1506]: time="2025-02-13T19:52:02.829792520Z" level=info msg="using legacy CRI server" Feb 13 19:52:02.830838 containerd[1506]: time="2025-02-13T19:52:02.829802400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:52:02.830838 containerd[1506]: time="2025-02-13T19:52:02.830069520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:52:02.831153 containerd[1506]: time="2025-02-13T19:52:02.831112120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:52:02.831376 containerd[1506]: time="2025-02-13T19:52:02.831337480Z" level=info msg="Start subscribing containerd event" Feb 13 19:52:02.831420 containerd[1506]: time="2025-02-13T19:52:02.831403160Z" level=info msg="Start recovering state" Feb 13 19:52:02.831505 containerd[1506]: time="2025-02-13T19:52:02.831488280Z" level=info msg="Start event monitor" Feb 13 19:52:02.831540 containerd[1506]: time="2025-02-13T19:52:02.831507200Z" level=info msg="Start 
snapshots syncer" Feb 13 19:52:02.831540 containerd[1506]: time="2025-02-13T19:52:02.831519080Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:52:02.831540 containerd[1506]: time="2025-02-13T19:52:02.831528600Z" level=info msg="Start streaming server" Feb 13 19:52:02.832484 containerd[1506]: time="2025-02-13T19:52:02.832453240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:52:02.832541 containerd[1506]: time="2025-02-13T19:52:02.832523880Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:52:02.833771 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:52:02.834884 containerd[1506]: time="2025-02-13T19:52:02.834851040Z" level=info msg="containerd successfully booted in 0.066258s" Feb 13 19:52:02.976683 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:52:03.002423 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:52:03.010142 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:52:03.031829 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:52:03.032023 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:52:03.039029 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:52:03.051541 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:52:03.059152 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:52:03.066186 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:52:03.068163 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:52:03.455900 systemd-networkd[1380]: eth1: Gained IPv6LL Feb 13 19:52:03.458926 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Feb 13 19:52:03.462500 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Feb 13 19:52:03.465384 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:52:03.476175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:03.479827 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:52:03.508828 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:52:03.711996 systemd-networkd[1380]: eth0: Gained IPv6LL Feb 13 19:52:03.713033 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Feb 13 19:52:04.195817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:04.197150 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:52:04.201275 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:04.202807 systemd[1]: Startup finished in 770ms (kernel) + 4.974s (initrd) + 4.569s (userspace) = 10.315s. Feb 13 19:52:04.777946 kubelet[1596]: E0213 19:52:04.777819 1596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:04.780572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:04.780835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:15.031163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:52:15.040164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:15.151857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:52:15.167301 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:15.219289 kubelet[1616]: E0213 19:52:15.219226 1616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:15.223230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:15.223760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:25.474629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:52:25.481913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:25.599783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:25.604368 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:25.653014 kubelet[1632]: E0213 19:52:25.652954 1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:25.656479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:25.656731 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:34.115847 systemd-timesyncd[1391]: Contacted time server 131.188.3.223:123 (2.flatcar.pool.ntp.org). 
Feb 13 19:52:34.115953 systemd-timesyncd[1391]: Initial clock synchronization to Thu 2025-02-13 19:52:33.934847 UTC. Feb 13 19:52:35.907379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:52:35.921136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:36.023804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:36.028773 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:36.079858 kubelet[1647]: E0213 19:52:36.079816 1647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:36.082267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:36.082454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:46.108119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 19:52:46.115981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:46.223882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:52:46.227595 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:46.277616 kubelet[1664]: E0213 19:52:46.277551 1664 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:46.280020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:46.280225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:47.932758 update_engine[1486]: I20250213 19:52:47.932111 1486 update_attempter.cc:509] Updating boot flags... Feb 13 19:52:47.973691 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1682) Feb 13 19:52:48.043517 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1684) Feb 13 19:52:56.357910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 19:52:56.363976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:56.479284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:52:56.491828 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:56.543062 kubelet[1699]: E0213 19:52:56.543014 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:56.545924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:56.546079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:53:06.608124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 19:53:06.618063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:06.735040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:06.739172 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:06.784426 kubelet[1715]: E0213 19:53:06.784365 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:06.786782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:06.786941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:53:16.858395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 19:53:16.866115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:53:16.989975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:16.991588 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:17.042022 kubelet[1731]: E0213 19:53:17.041956 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:17.045378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:17.045620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:53:27.108332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 19:53:27.119043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:27.234629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:27.245248 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:27.292938 kubelet[1747]: E0213 19:53:27.292891 1747 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:27.295476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:27.295728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:53:37.358231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Feb 13 19:53:37.364993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:37.487899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:37.497256 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:37.543265 kubelet[1764]: E0213 19:53:37.543203 1764 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:37.546388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:37.546563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:53:47.608113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 19:53:47.621046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:47.732340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:53:47.737438 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:47.784371 kubelet[1781]: E0213 19:53:47.784268 1781 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:47.787260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:47.787448 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:53:53.599166 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:53:53.609126 systemd[1]: Started sshd@0-188.245.239.161:22-139.178.68.195:54850.service - OpenSSH per-connection server daemon (139.178.68.195:54850). Feb 13 19:53:54.609264 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 54850 ssh2: RSA SHA256:jgGLROb1Jd+vKblLO1iumzrmTNJh/fegOVZ98c435jo Feb 13 19:53:54.613169 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:54.623306 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:53:54.633487 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:53:54.637632 systemd-logind[1483]: New session 1 of user core. Feb 13 19:53:54.646397 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:53:54.653251 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:53:54.668491 (systemd)[1794]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:53:54.784758 systemd[1794]: Queued start job for default target default.target. Feb 13 19:53:54.793626 systemd[1794]: Created slice app.slice - User Application Slice. Feb 13 19:53:54.794065 systemd[1794]: Reached target paths.target - Paths. Feb 13 19:53:54.794098 systemd[1794]: Reached target timers.target - Timers. Feb 13 19:53:54.795887 systemd[1794]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:53:54.810476 systemd[1794]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:53:54.810730 systemd[1794]: Reached target sockets.target - Sockets. Feb 13 19:53:54.810768 systemd[1794]: Reached target basic.target - Basic System. Feb 13 19:53:54.810852 systemd[1794]: Reached target default.target - Main User Target. Feb 13 19:53:54.810899 systemd[1]: Started user@500.service - User Manager for UID 500. 
Feb 13 19:53:54.810916 systemd[1794]: Startup finished in 135ms. Feb 13 19:53:54.819016 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:53:55.516168 systemd[1]: Started sshd@1-188.245.239.161:22-139.178.68.195:54854.service - OpenSSH per-connection server daemon (139.178.68.195:54854). Feb 13 19:53:56.502259 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 54854 ssh2: RSA SHA256:jgGLROb1Jd+vKblLO1iumzrmTNJh/fegOVZ98c435jo Feb 13 19:53:56.504154 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:56.509937 systemd-logind[1483]: New session 2 of user core. Feb 13 19:53:56.520974 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:53:57.184255 sshd[1807]: Connection closed by 139.178.68.195 port 54854 Feb 13 19:53:57.185295 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:57.190824 systemd[1]: sshd@1-188.245.239.161:22-139.178.68.195:54854.service: Deactivated successfully. Feb 13 19:53:57.192967 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:53:57.193831 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:53:57.195316 systemd-logind[1483]: Removed session 2. Feb 13 19:53:57.366010 systemd[1]: Started sshd@2-188.245.239.161:22-139.178.68.195:35442.service - OpenSSH per-connection server daemon (139.178.68.195:35442). Feb 13 19:53:57.858090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 19:53:57.874149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:53:57.975232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:53:57.979789 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:53:58.021382 kubelet[1821]: E0213 19:53:58.021317 1821 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:53:58.025211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:53:58.025371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:53:58.363338 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 35442 ssh2: RSA SHA256:jgGLROb1Jd+vKblLO1iumzrmTNJh/fegOVZ98c435jo Feb 13 19:53:58.365943 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:58.371966 systemd-logind[1483]: New session 3 of user core. Feb 13 19:53:58.379979 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:53:59.047843 sshd[1831]: Connection closed by 139.178.68.195 port 35442 Feb 13 19:53:59.048981 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:59.053420 systemd[1]: sshd@2-188.245.239.161:22-139.178.68.195:35442.service: Deactivated successfully. Feb 13 19:53:59.055630 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:53:59.056901 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:53:59.057849 systemd-logind[1483]: Removed session 3. Feb 13 19:53:59.228234 systemd[1]: Started sshd@3-188.245.239.161:22-139.178.68.195:35446.service - OpenSSH per-connection server daemon (139.178.68.195:35446). 
Feb 13 19:54:00.206063 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 35446 ssh2: RSA SHA256:jgGLROb1Jd+vKblLO1iumzrmTNJh/fegOVZ98c435jo Feb 13 19:54:00.208630 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:00.215651 systemd-logind[1483]: New session 4 of user core. Feb 13 19:54:00.222986 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:54:00.882899 sshd[1838]: Connection closed by 139.178.68.195 port 35446 Feb 13 19:54:00.883832 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:00.888824 systemd[1]: sshd@3-188.245.239.161:22-139.178.68.195:35446.service: Deactivated successfully. Feb 13 19:54:00.890881 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:54:00.892566 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:54:00.894299 systemd-logind[1483]: Removed session 4. Feb 13 19:54:01.053638 systemd[1]: Started sshd@4-188.245.239.161:22-139.178.68.195:35458.service - OpenSSH per-connection server daemon (139.178.68.195:35458). Feb 13 19:54:02.079026 sshd[1843]: Accepted publickey for core from 139.178.68.195 port 35458 ssh2: RSA SHA256:jgGLROb1Jd+vKblLO1iumzrmTNJh/fegOVZ98c435jo Feb 13 19:54:02.081852 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:02.088536 systemd-logind[1483]: New session 5 of user core. Feb 13 19:54:02.094971 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:54:02.612332 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:54:02.612631 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:54:02.628214 sudo[1846]: pam_unix(sudo:session): session closed for user root Feb 13 19:54:02.788896 sshd[1845]: Connection closed by 139.178.68.195 port 35458 Feb 13 19:54:02.789840 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:02.794045 systemd[1]: sshd@4-188.245.239.161:22-139.178.68.195:35458.service: Deactivated successfully. Feb 13 19:54:02.796249 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:54:02.798083 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:54:02.800406 systemd-logind[1483]: Removed session 5. Feb 13 19:54:02.963080 systemd[1]: Started sshd@5-188.245.239.161:22-139.178.68.195:35462.service - OpenSSH per-connection server daemon (139.178.68.195:35462). Feb 13 19:54:03.955136 sshd[1851]: Accepted publickey for core from 139.178.68.195 port 35462 ssh2: RSA SHA256:jgGLROb1Jd+vKblLO1iumzrmTNJh/fegOVZ98c435jo Feb 13 19:54:03.957281 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:03.962313 systemd-logind[1483]: New session 6 of user core. Feb 13 19:54:03.971306 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:54:04.481506 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:54:04.482029 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:54:04.486274 sudo[1855]: pam_unix(sudo:session): session closed for user root Feb 13 19:54:04.492206 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:54:04.492463 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:54:04.515366 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:54:04.549828 augenrules[1877]: No rules Feb 13 19:54:04.552548 sudo[1854]: pam_unix(sudo:session): session closed for user root Feb 13 19:54:04.550468 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:54:04.550653 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:54:04.712984 sshd[1853]: Connection closed by 139.178.68.195 port 35462 Feb 13 19:54:04.713810 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:04.718634 systemd[1]: sshd@5-188.245.239.161:22-139.178.68.195:35462.service: Deactivated successfully. Feb 13 19:54:04.720449 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:54:04.721477 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:54:04.722621 systemd-logind[1483]: Removed session 6. Feb 13 19:54:04.884981 systemd[1]: Started sshd@6-188.245.239.161:22-139.178.68.195:35466.service - OpenSSH per-connection server daemon (139.178.68.195:35466). 
Feb 13 19:54:05.874383 sshd[1885]: Accepted publickey for core from 139.178.68.195 port 35466 ssh2: RSA SHA256:jgGLROb1Jd+vKblLO1iumzrmTNJh/fegOVZ98c435jo Feb 13 19:54:05.877879 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:05.883493 systemd-logind[1483]: New session 7 of user core. Feb 13 19:54:05.893022 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:54:06.398270 sudo[1888]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:54:06.398559 sudo[1888]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:54:07.040448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:54:07.049196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:54:07.069421 systemd[1]: Reloading requested from client PID 1925 ('systemctl') (unit session-7.scope)... Feb 13 19:54:07.069442 systemd[1]: Reloading... Feb 13 19:54:07.183706 zram_generator::config[1962]: No configuration found. Feb 13 19:54:07.287122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:54:07.352737 systemd[1]: Reloading finished in 282 ms. Feb 13 19:54:07.403878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:54:07.403965 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:54:07.404270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:54:07.410169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:54:07.532916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:54:07.536849 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:54:07.581035 kubelet[2013]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:54:07.581035 kubelet[2013]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:54:07.581035 kubelet[2013]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:54:07.581483 kubelet[2013]: I0213 19:54:07.581210 2013 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:54:07.879359 kubelet[2013]: I0213 19:54:07.879289 2013 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:54:07.879359 kubelet[2013]: I0213 19:54:07.879331 2013 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:54:07.882687 kubelet[2013]: I0213 19:54:07.882605 2013 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:54:07.909015 kubelet[2013]: I0213 19:54:07.908844 2013 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:54:07.920112 kubelet[2013]: I0213 19:54:07.920071 2013 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:54:07.922386 kubelet[2013]: I0213 19:54:07.922309 2013 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:54:07.922572 kubelet[2013]: I0213 19:54:07.922376 2013 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:54:07.922683 kubelet[2013]: I0213 19:54:07.922627 2013 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:54:07.922683 kubelet[2013]: I0213 19:54:07.922638 2013 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:54:07.922970 kubelet[2013]: I0213 19:54:07.922937 2013 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:54:07.924335 kubelet[2013]: I0213 19:54:07.924102 2013 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:54:07.924335 kubelet[2013]: I0213 19:54:07.924127 2013 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:54:07.924335 kubelet[2013]: I0213 19:54:07.924336 2013 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:54:07.924548 kubelet[2013]: I0213 19:54:07.924409 2013 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:54:07.925404 kubelet[2013]: E0213 19:54:07.925368 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:07.925627 kubelet[2013]: E0213 19:54:07.925568 2013 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:07.926525 kubelet[2013]: I0213 19:54:07.926334 2013 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:54:07.926773 kubelet[2013]: I0213 19:54:07.926745 2013 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:54:07.926929 kubelet[2013]: W0213 19:54:07.926858 2013 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:54:07.927801 kubelet[2013]: I0213 19:54:07.927654 2013 server.go:1264] "Started kubelet" Feb 13 19:54:07.929003 kubelet[2013]: I0213 19:54:07.928971 2013 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:54:07.941884 kubelet[2013]: I0213 19:54:07.940342 2013 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:54:07.941884 kubelet[2013]: I0213 19:54:07.941455 2013 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:54:07.942190 kubelet[2013]: I0213 19:54:07.942176 2013 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:54:07.942750 kubelet[2013]: I0213 19:54:07.942730 2013 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:54:07.947552 kubelet[2013]: I0213 19:54:07.943109 2013 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:54:07.947727 kubelet[2013]: I0213 19:54:07.943521 2013 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:54:07.947983 kubelet[2013]: I0213 19:54:07.947967 2013 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:54:07.948385 kubelet[2013]: W0213 19:54:07.946989 2013 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:54:07.948495 kubelet[2013]: E0213 19:54:07.948482 2013 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:54:07.948550 kubelet[2013]: E0213 19:54:07.946607 2013 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:54:07.948655 kubelet[2013]: W0213 19:54:07.948644 2013 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:54:07.949247 kubelet[2013]: E0213 19:54:07.949228 2013 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:54:07.949364 kubelet[2013]: I0213 19:54:07.948925 2013 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:54:07.949532 kubelet[2013]: I0213 19:54:07.949498 2013 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:54:07.952461 kubelet[2013]: E0213 19:54:07.947708 2013 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1823dc9cfde27aa8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-02-13 19:54:07.927630504 +0000 UTC m=+0.387468929,LastTimestamp:2025-02-13 19:54:07.927630504 +0000 UTC m=+0.387468929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Feb 13 19:54:07.952461 kubelet[2013]: W0213 19:54:07.949929 2013 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 19:54:07.952461 kubelet[2013]: E0213 19:54:07.952149 2013 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:54:07.955499 kubelet[2013]: I0213 19:54:07.955472 2013 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:54:07.969908 kubelet[2013]: E0213 19:54:07.969864 2013 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:54:07.970046 kubelet[2013]: E0213 19:54:07.969970 2013 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1823dc9cff037dd2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-02-13 19:54:07.946571218 +0000 UTC m=+0.406409643,LastTimestamp:2025-02-13 19:54:07.946571218 +0000 UTC m=+0.406409643,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Feb 13 19:54:07.971471 kubelet[2013]: I0213 19:54:07.971447 2013 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:54:07.971580 kubelet[2013]: I0213 19:54:07.971569 2013 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:54:07.971826 kubelet[2013]: I0213 19:54:07.971628 2013 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:54:07.976249 kubelet[2013]: I0213 19:54:07.976229 2013 policy_none.go:49] "None policy: Start" Feb 13 19:54:07.979428 kubelet[2013]: I0213 19:54:07.979037 2013 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:54:07.979428 kubelet[2013]: I0213 19:54:07.979062 2013 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:54:07.986777 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:54:07.999008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:54:08.006774 kubelet[2013]: I0213 19:54:08.006174 2013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:54:08.009422 kubelet[2013]: I0213 19:54:08.009006 2013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:54:08.009422 kubelet[2013]: I0213 19:54:08.009105 2013 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:54:08.009422 kubelet[2013]: I0213 19:54:08.009127 2013 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:54:08.009422 kubelet[2013]: E0213 19:54:08.009169 2013 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:54:08.009374 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:54:08.021425 kubelet[2013]: I0213 19:54:08.021375 2013 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:54:08.022152 kubelet[2013]: I0213 19:54:08.022062 2013 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:54:08.023989 kubelet[2013]: I0213 19:54:08.023044 2013 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:54:08.025147 kubelet[2013]: E0213 19:54:08.025075 2013 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Feb 13 19:54:08.044174 kubelet[2013]: I0213 19:54:08.044145 2013 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.4" Feb 13 19:54:08.057572 kubelet[2013]: I0213 19:54:08.057510 2013 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.4" Feb 13 19:54:08.076503 kubelet[2013]: E0213 19:54:08.076455 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.096361 sudo[1888]: pam_unix(sudo:session): session closed for user root Feb 13 19:54:08.176791 kubelet[2013]: E0213 19:54:08.176592 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.257283 sshd[1887]: Connection closed by 139.178.68.195 port 35466 Feb 13 19:54:08.257151 sshd-session[1885]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:08.262106 systemd[1]: sshd@6-188.245.239.161:22-139.178.68.195:35466.service: Deactivated successfully. Feb 13 19:54:08.264337 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:54:08.265381 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:54:08.266753 systemd-logind[1483]: Removed session 7. 
Feb 13 19:54:08.277056 kubelet[2013]: E0213 19:54:08.276993 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.377927 kubelet[2013]: E0213 19:54:08.377863 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.478387 kubelet[2013]: E0213 19:54:08.478225 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.579181 kubelet[2013]: E0213 19:54:08.579119 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.679582 kubelet[2013]: E0213 19:54:08.679506 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.780823 kubelet[2013]: E0213 19:54:08.780559 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.881375 kubelet[2013]: E0213 19:54:08.881293 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:08.886551 kubelet[2013]: I0213 19:54:08.886492 2013 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:54:08.886893 kubelet[2013]: W0213 19:54:08.886777 2013 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:54:08.887257 kubelet[2013]: W0213 19:54:08.887064 2013 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:54:08.926101 kubelet[2013]: E0213 19:54:08.926050 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:08.982253 kubelet[2013]: E0213 19:54:08.982171 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:09.082613 kubelet[2013]: E0213 19:54:09.082388 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:09.183436 kubelet[2013]: E0213 19:54:09.183354 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:09.284445 kubelet[2013]: E0213 19:54:09.284385 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:09.385351 kubelet[2013]: E0213 19:54:09.385285 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Feb 13 19:54:09.486729 kubelet[2013]: I0213 19:54:09.486554 2013 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:54:09.487250 containerd[1506]: time="2025-02-13T19:54:09.487123746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:54:09.487658 kubelet[2013]: I0213 19:54:09.487469 2013 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:54:09.926183 kubelet[2013]: I0213 19:54:09.926047 2013 apiserver.go:52] "Watching apiserver" Feb 13 19:54:09.926915 kubelet[2013]: E0213 19:54:09.926477 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:09.933638 kubelet[2013]: I0213 19:54:09.933588 2013 topology_manager.go:215] "Topology Admit Handler" podUID="5f6bb2c1-2c03-4fec-a099-9a02f486c584" podNamespace="calico-system" podName="calico-node-8h6m8" Feb 13 19:54:09.933803 kubelet[2013]: I0213 19:54:09.933705 2013 topology_manager.go:215] "Topology Admit Handler" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" podNamespace="calico-system" podName="csi-node-driver-dpmmk" Feb 13 19:54:09.933803 kubelet[2013]: I0213 19:54:09.933796 2013 topology_manager.go:215] "Topology Admit Handler" podUID="a7d7d7ee-8ed4-4d57-87eb-572ac86e287a" podNamespace="kube-system" podName="kube-proxy-m6mvw" Feb 13 19:54:09.934946 kubelet[2013]: E0213 19:54:09.934617 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:09.942800 systemd[1]: Created slice kubepods-besteffort-poda7d7d7ee_8ed4_4d57_87eb_572ac86e287a.slice - libcontainer container kubepods-besteffort-poda7d7d7ee_8ed4_4d57_87eb_572ac86e287a.slice. 
Feb 13 19:54:09.948264 kubelet[2013]: I0213 19:54:09.948239 2013 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:54:09.957253 systemd[1]: Created slice kubepods-besteffort-pod5f6bb2c1_2c03_4fec_a099_9a02f486c584.slice - libcontainer container kubepods-besteffort-pod5f6bb2c1_2c03_4fec_a099_9a02f486c584.slice. Feb 13 19:54:09.964223 kubelet[2013]: I0213 19:54:09.964187 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-cni-net-dir\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.964626 kubelet[2013]: I0213 19:54:09.964405 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e434ee7b-e67f-4fda-aedd-10f6c7172fae-socket-dir\") pod \"csi-node-driver-dpmmk\" (UID: \"e434ee7b-e67f-4fda-aedd-10f6c7172fae\") " pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:09.964626 kubelet[2013]: I0213 19:54:09.964436 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7d7d7ee-8ed4-4d57-87eb-572ac86e287a-xtables-lock\") pod \"kube-proxy-m6mvw\" (UID: \"a7d7d7ee-8ed4-4d57-87eb-572ac86e287a\") " pod="kube-system/kube-proxy-m6mvw" Feb 13 19:54:09.964626 kubelet[2013]: I0213 19:54:09.964456 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7d7d7ee-8ed4-4d57-87eb-572ac86e287a-lib-modules\") pod \"kube-proxy-m6mvw\" (UID: \"a7d7d7ee-8ed4-4d57-87eb-572ac86e287a\") " pod="kube-system/kube-proxy-m6mvw" Feb 13 19:54:09.964626 kubelet[2013]: I0213 19:54:09.964472 2013 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-lib-modules\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.964626 kubelet[2013]: I0213 19:54:09.964489 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6bb2c1-2c03-4fec-a099-9a02f486c584-tigera-ca-bundle\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.964868 kubelet[2013]: I0213 19:54:09.964504 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-var-lib-calico\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.964868 kubelet[2013]: I0213 19:54:09.964520 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-cni-log-dir\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.964868 kubelet[2013]: I0213 19:54:09.964538 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-flexvol-driver-host\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.964868 kubelet[2013]: I0213 19:54:09.964582 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hmsmk\" (UniqueName: \"kubernetes.io/projected/5f6bb2c1-2c03-4fec-a099-9a02f486c584-kube-api-access-hmsmk\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.964868 kubelet[2013]: I0213 19:54:09.964600 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7d7d7ee-8ed4-4d57-87eb-572ac86e287a-kube-proxy\") pod \"kube-proxy-m6mvw\" (UID: \"a7d7d7ee-8ed4-4d57-87eb-572ac86e287a\") " pod="kube-system/kube-proxy-m6mvw" Feb 13 19:54:09.965019 kubelet[2013]: I0213 19:54:09.964635 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-policysync\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.965019 kubelet[2013]: I0213 19:54:09.964751 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5f6bb2c1-2c03-4fec-a099-9a02f486c584-node-certs\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.965019 kubelet[2013]: I0213 19:54:09.964789 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-cni-bin-dir\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.965019 kubelet[2013]: I0213 19:54:09.964863 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/e434ee7b-e67f-4fda-aedd-10f6c7172fae-varrun\") pod \"csi-node-driver-dpmmk\" (UID: \"e434ee7b-e67f-4fda-aedd-10f6c7172fae\") " pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:09.965019 kubelet[2013]: I0213 19:54:09.964891 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jv5x\" (UniqueName: \"kubernetes.io/projected/a7d7d7ee-8ed4-4d57-87eb-572ac86e287a-kube-api-access-7jv5x\") pod \"kube-proxy-m6mvw\" (UID: \"a7d7d7ee-8ed4-4d57-87eb-572ac86e287a\") " pod="kube-system/kube-proxy-m6mvw" Feb 13 19:54:09.965268 kubelet[2013]: I0213 19:54:09.964910 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-xtables-lock\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.965268 kubelet[2013]: I0213 19:54:09.964928 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5f6bb2c1-2c03-4fec-a099-9a02f486c584-var-run-calico\") pod \"calico-node-8h6m8\" (UID: \"5f6bb2c1-2c03-4fec-a099-9a02f486c584\") " pod="calico-system/calico-node-8h6m8" Feb 13 19:54:09.965268 kubelet[2013]: I0213 19:54:09.964984 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e434ee7b-e67f-4fda-aedd-10f6c7172fae-kubelet-dir\") pod \"csi-node-driver-dpmmk\" (UID: \"e434ee7b-e67f-4fda-aedd-10f6c7172fae\") " pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:09.965268 kubelet[2013]: I0213 19:54:09.965000 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/e434ee7b-e67f-4fda-aedd-10f6c7172fae-registration-dir\") pod \"csi-node-driver-dpmmk\" (UID: \"e434ee7b-e67f-4fda-aedd-10f6c7172fae\") " pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:09.965268 kubelet[2013]: I0213 19:54:09.965022 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vkk\" (UniqueName: \"kubernetes.io/projected/e434ee7b-e67f-4fda-aedd-10f6c7172fae-kube-api-access-f5vkk\") pod \"csi-node-driver-dpmmk\" (UID: \"e434ee7b-e67f-4fda-aedd-10f6c7172fae\") " pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:10.072911 kubelet[2013]: E0213 19:54:10.072818 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:54:10.072911 kubelet[2013]: W0213 19:54:10.072845 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:54:10.072911 kubelet[2013]: E0213 19:54:10.072865 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:54:10.074647 kubelet[2013]: E0213 19:54:10.074602 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:54:10.074647 kubelet[2013]: W0213 19:54:10.074631 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:54:10.074647 kubelet[2013]: E0213 19:54:10.074647 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:54:10.095859 kubelet[2013]: E0213 19:54:10.095829 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:54:10.095859 kubelet[2013]: W0213 19:54:10.095850 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:54:10.096212 kubelet[2013]: E0213 19:54:10.095874 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:54:10.096212 kubelet[2013]: E0213 19:54:10.096130 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:54:10.096212 kubelet[2013]: W0213 19:54:10.096140 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:54:10.096212 kubelet[2013]: E0213 19:54:10.096153 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:54:10.099353 kubelet[2013]: E0213 19:54:10.099320 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:54:10.099353 kubelet[2013]: W0213 19:54:10.099349 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:54:10.099509 kubelet[2013]: E0213 19:54:10.099370 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:54:10.254110 containerd[1506]: time="2025-02-13T19:54:10.253501508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6mvw,Uid:a7d7d7ee-8ed4-4d57-87eb-572ac86e287a,Namespace:kube-system,Attempt:0,}" Feb 13 19:54:10.261885 containerd[1506]: time="2025-02-13T19:54:10.261652339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8h6m8,Uid:5f6bb2c1-2c03-4fec-a099-9a02f486c584,Namespace:calico-system,Attempt:0,}" Feb 13 19:54:10.872826 containerd[1506]: time="2025-02-13T19:54:10.872764522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:54:10.875367 containerd[1506]: time="2025-02-13T19:54:10.875305455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 19:54:10.876305 containerd[1506]: time="2025-02-13T19:54:10.876007618Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:54:10.877114 containerd[1506]: time="2025-02-13T19:54:10.877071184Z" level=info 
msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:54:10.877979 containerd[1506]: time="2025-02-13T19:54:10.877929108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:54:10.884482 containerd[1506]: time="2025-02-13T19:54:10.884095179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:54:10.886394 containerd[1506]: time="2025-02-13T19:54:10.886355751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 624.572491ms" Feb 13 19:54:10.887596 containerd[1506]: time="2025-02-13T19:54:10.887545277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 633.933929ms" Feb 13 19:54:10.927504 kubelet[2013]: E0213 19:54:10.927424 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:10.986623 containerd[1506]: time="2025-02-13T19:54:10.986477900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:10.986623 containerd[1506]: time="2025-02-13T19:54:10.986542820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:10.986977 containerd[1506]: time="2025-02-13T19:54:10.986805821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:10.988484 containerd[1506]: time="2025-02-13T19:54:10.988403989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:10.991796 containerd[1506]: time="2025-02-13T19:54:10.991618366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:10.991796 containerd[1506]: time="2025-02-13T19:54:10.991709206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:10.992025 containerd[1506]: time="2025-02-13T19:54:10.991975808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:10.992223 containerd[1506]: time="2025-02-13T19:54:10.992174649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:11.080123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189977709.mount: Deactivated successfully. Feb 13 19:54:11.098900 systemd[1]: Started cri-containerd-d7026d270c317e6aee301f7b6e9a82e86bff048630eed4fc5cac76c841f97760.scope - libcontainer container d7026d270c317e6aee301f7b6e9a82e86bff048630eed4fc5cac76c841f97760. 
Feb 13 19:54:11.103818 systemd[1]: Started cri-containerd-71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a.scope - libcontainer container 71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a. Feb 13 19:54:11.137918 containerd[1506]: time="2025-02-13T19:54:11.136648586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8h6m8,Uid:5f6bb2c1-2c03-4fec-a099-9a02f486c584,Namespace:calico-system,Attempt:0,} returns sandbox id \"71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a\"" Feb 13 19:54:11.139316 containerd[1506]: time="2025-02-13T19:54:11.139283064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:54:11.140946 containerd[1506]: time="2025-02-13T19:54:11.140810447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6mvw,Uid:a7d7d7ee-8ed4-4d57-87eb-572ac86e287a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7026d270c317e6aee301f7b6e9a82e86bff048630eed4fc5cac76c841f97760\"" Feb 13 19:54:11.927902 kubelet[2013]: E0213 19:54:11.927702 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:12.010913 kubelet[2013]: E0213 19:54:12.010257 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:12.721865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622680963.mount: Deactivated successfully. 
Feb 13 19:54:12.807293 containerd[1506]: time="2025-02-13T19:54:12.806480543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:12.809153 containerd[1506]: time="2025-02-13T19:54:12.809098820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 19:54:12.811690 containerd[1506]: time="2025-02-13T19:54:12.810722603Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:12.813800 containerd[1506]: time="2025-02-13T19:54:12.813750166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:12.815324 containerd[1506]: time="2025-02-13T19:54:12.815281468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.675675919s" Feb 13 19:54:12.815462 containerd[1506]: time="2025-02-13T19:54:12.815446431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:54:12.817499 containerd[1506]: time="2025-02-13T19:54:12.817467379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:54:12.820372 containerd[1506]: time="2025-02-13T19:54:12.820334220Z" level=info msg="CreateContainer within sandbox 
\"71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:54:12.836828 containerd[1506]: time="2025-02-13T19:54:12.836777814Z" level=info msg="CreateContainer within sandbox \"71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69\"" Feb 13 19:54:12.837925 containerd[1506]: time="2025-02-13T19:54:12.837870349Z" level=info msg="StartContainer for \"c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69\"" Feb 13 19:54:12.879171 systemd[1]: Started cri-containerd-c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69.scope - libcontainer container c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69. Feb 13 19:54:12.914796 containerd[1506]: time="2025-02-13T19:54:12.914747362Z" level=info msg="StartContainer for \"c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69\" returns successfully" Feb 13 19:54:12.928361 kubelet[2013]: E0213 19:54:12.928294 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:12.929006 systemd[1]: cri-containerd-c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69.scope: Deactivated successfully. 
Feb 13 19:54:12.977918 containerd[1506]: time="2025-02-13T19:54:12.977335411Z" level=info msg="shim disconnected" id=c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69 namespace=k8s.io Feb 13 19:54:12.977918 containerd[1506]: time="2025-02-13T19:54:12.977397572Z" level=warning msg="cleaning up after shim disconnected" id=c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69 namespace=k8s.io Feb 13 19:54:12.977918 containerd[1506]: time="2025-02-13T19:54:12.977409932Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:54:13.693161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49ddb22b16153b5a6556ff401a86f508567d25ddf1c3d8edc2a09e55687bb69-rootfs.mount: Deactivated successfully. Feb 13 19:54:13.928570 kubelet[2013]: E0213 19:54:13.928435 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:14.010808 kubelet[2013]: E0213 19:54:14.010225 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:14.256540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255011488.mount: Deactivated successfully. 
Feb 13 19:54:14.537421 containerd[1506]: time="2025-02-13T19:54:14.537293083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:14.538807 containerd[1506]: time="2025-02-13T19:54:14.538734862Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663396" Feb 13 19:54:14.540093 containerd[1506]: time="2025-02-13T19:54:14.540010120Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:14.543756 containerd[1506]: time="2025-02-13T19:54:14.542647275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:14.543756 containerd[1506]: time="2025-02-13T19:54:14.543575848Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.725941507s" Feb 13 19:54:14.543756 containerd[1506]: time="2025-02-13T19:54:14.543610329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:54:14.545616 containerd[1506]: time="2025-02-13T19:54:14.545582675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:54:14.546574 containerd[1506]: time="2025-02-13T19:54:14.546546489Z" level=info msg="CreateContainer within sandbox \"d7026d270c317e6aee301f7b6e9a82e86bff048630eed4fc5cac76c841f97760\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:54:14.571602 containerd[1506]: time="2025-02-13T19:54:14.571552389Z" level=info msg="CreateContainer within sandbox \"d7026d270c317e6aee301f7b6e9a82e86bff048630eed4fc5cac76c841f97760\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d6c9bcc2a77be9f55c3b8e7a6d67d9232af72f36bb1b120253c9c2e13c165d4\"" Feb 13 19:54:14.572857 containerd[1506]: time="2025-02-13T19:54:14.572822486Z" level=info msg="StartContainer for \"2d6c9bcc2a77be9f55c3b8e7a6d67d9232af72f36bb1b120253c9c2e13c165d4\"" Feb 13 19:54:14.600886 systemd[1]: Started cri-containerd-2d6c9bcc2a77be9f55c3b8e7a6d67d9232af72f36bb1b120253c9c2e13c165d4.scope - libcontainer container 2d6c9bcc2a77be9f55c3b8e7a6d67d9232af72f36bb1b120253c9c2e13c165d4. Feb 13 19:54:14.636492 containerd[1506]: time="2025-02-13T19:54:14.636434233Z" level=info msg="StartContainer for \"2d6c9bcc2a77be9f55c3b8e7a6d67d9232af72f36bb1b120253c9c2e13c165d4\" returns successfully" Feb 13 19:54:14.928806 kubelet[2013]: E0213 19:54:14.928739 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:15.929535 kubelet[2013]: E0213 19:54:15.929457 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:16.010265 kubelet[2013]: E0213 19:54:16.009697 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:16.929775 kubelet[2013]: E0213 19:54:16.929692 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:17.930839 kubelet[2013]: E0213 19:54:17.930770 2013 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:18.010367 kubelet[2013]: E0213 19:54:18.009571 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:18.930923 kubelet[2013]: E0213 19:54:18.930884 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:19.097778 containerd[1506]: time="2025-02-13T19:54:19.097697024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:19.099740 containerd[1506]: time="2025-02-13T19:54:19.099286284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:54:19.100724 containerd[1506]: time="2025-02-13T19:54:19.100690061Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:19.103492 containerd[1506]: time="2025-02-13T19:54:19.103454055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:19.104206 containerd[1506]: time="2025-02-13T19:54:19.104174384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.558363226s" Feb 13 19:54:19.104324 containerd[1506]: time="2025-02-13T19:54:19.104306946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:54:19.106966 containerd[1506]: time="2025-02-13T19:54:19.106914418Z" level=info msg="CreateContainer within sandbox \"71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:54:19.126867 containerd[1506]: time="2025-02-13T19:54:19.126814183Z" level=info msg="CreateContainer within sandbox \"71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3\"" Feb 13 19:54:19.127814 containerd[1506]: time="2025-02-13T19:54:19.127750394Z" level=info msg="StartContainer for \"3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3\"" Feb 13 19:54:19.161412 systemd[1]: Started cri-containerd-3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3.scope - libcontainer container 3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3. 
Feb 13 19:54:19.196120 containerd[1506]: time="2025-02-13T19:54:19.195289185Z" level=info msg="StartContainer for \"3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3\" returns successfully" Feb 13 19:54:19.662522 containerd[1506]: time="2025-02-13T19:54:19.662415610Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:54:19.666507 systemd[1]: cri-containerd-3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3.scope: Deactivated successfully. Feb 13 19:54:19.690970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3-rootfs.mount: Deactivated successfully. Feb 13 19:54:19.755388 kubelet[2013]: I0213 19:54:19.754445 2013 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:54:19.873459 containerd[1506]: time="2025-02-13T19:54:19.873357764Z" level=info msg="shim disconnected" id=3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3 namespace=k8s.io Feb 13 19:54:19.874101 containerd[1506]: time="2025-02-13T19:54:19.873745489Z" level=warning msg="cleaning up after shim disconnected" id=3c4b8af88a4f66fad77bb84a95ac3e19ba545f3d61d55ec06d10dcb0443befd3 namespace=k8s.io Feb 13 19:54:19.874101 containerd[1506]: time="2025-02-13T19:54:19.873784130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:54:19.886331 containerd[1506]: time="2025-02-13T19:54:19.886283283Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:54:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:54:19.932439 kubelet[2013]: E0213 19:54:19.932285 2013 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:20.018578 systemd[1]: Created slice kubepods-besteffort-pode434ee7b_e67f_4fda_aedd_10f6c7172fae.slice - libcontainer container kubepods-besteffort-pode434ee7b_e67f_4fda_aedd_10f6c7172fae.slice. Feb 13 19:54:20.022052 containerd[1506]: time="2025-02-13T19:54:20.021512984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:0,}" Feb 13 19:54:20.056583 containerd[1506]: time="2025-02-13T19:54:20.056548686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:54:20.074907 kubelet[2013]: I0213 19:54:20.074789 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m6mvw" podStartSLOduration=8.672537673 podStartE2EDuration="12.074761066s" podCreationTimestamp="2025-02-13 19:54:08 +0000 UTC" firstStartedPulling="2025-02-13 19:54:11.142639073 +0000 UTC m=+3.602477498" lastFinishedPulling="2025-02-13 19:54:14.544862386 +0000 UTC m=+7.004700891" observedRunningTime="2025-02-13 19:54:15.061443485 +0000 UTC m=+7.521281950" watchObservedRunningTime="2025-02-13 19:54:20.074761066 +0000 UTC m=+12.534599531" Feb 13 19:54:20.112230 containerd[1506]: time="2025-02-13T19:54:20.112168557Z" level=error msg="Failed to destroy network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:20.112569 containerd[1506]: time="2025-02-13T19:54:20.112545641Z" level=error msg="encountered an error cleaning up failed sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:20.112677 containerd[1506]: time="2025-02-13T19:54:20.112616922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:20.112928 kubelet[2013]: E0213 19:54:20.112888 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:20.112999 kubelet[2013]: E0213 19:54:20.112958 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:20.112999 kubelet[2013]: E0213 19:54:20.112978 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:20.113043 kubelet[2013]: E0213 19:54:20.113020 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:20.119848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2-shm.mount: Deactivated successfully. 
Feb 13 19:54:20.933133 kubelet[2013]: E0213 19:54:20.933067 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:21.058257 kubelet[2013]: I0213 19:54:21.058222 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2" Feb 13 19:54:21.058979 containerd[1506]: time="2025-02-13T19:54:21.058939162Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:54:21.059146 containerd[1506]: time="2025-02-13T19:54:21.059124724Z" level=info msg="Ensure that sandbox b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2 in task-service has been cleanup successfully" Feb 13 19:54:21.060813 containerd[1506]: time="2025-02-13T19:54:21.060761664Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully" Feb 13 19:54:21.060959 containerd[1506]: time="2025-02-13T19:54:21.060792104Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully" Feb 13 19:54:21.061326 systemd[1]: run-netns-cni\x2d15f2d22d\x2d0354\x2d599c\x2d1f89\x2d89adcd1ef128.mount: Deactivated successfully. 
Feb 13 19:54:21.061906 containerd[1506]: time="2025-02-13T19:54:21.061514673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:1,}" Feb 13 19:54:21.128021 containerd[1506]: time="2025-02-13T19:54:21.127961419Z" level=error msg="Failed to destroy network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:21.129960 containerd[1506]: time="2025-02-13T19:54:21.129909282Z" level=error msg="encountered an error cleaning up failed sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:21.130056 containerd[1506]: time="2025-02-13T19:54:21.130000843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:21.130429 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579-shm.mount: Deactivated successfully. 
Feb 13 19:54:21.131572 kubelet[2013]: E0213 19:54:21.130699 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:21.131572 kubelet[2013]: E0213 19:54:21.130753 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:21.131572 kubelet[2013]: E0213 19:54:21.130772 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:21.132188 kubelet[2013]: E0213 19:54:21.130829 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:21.934145 kubelet[2013]: E0213 19:54:21.933841 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:22.066026 kubelet[2013]: I0213 19:54:22.065993 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579" Feb 13 19:54:22.066803 containerd[1506]: time="2025-02-13T19:54:22.066767308Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" Feb 13 19:54:22.066978 containerd[1506]: time="2025-02-13T19:54:22.066956791Z" level=info msg="Ensure that sandbox 70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579 in task-service has been cleanup successfully" Feb 13 19:54:22.068534 containerd[1506]: time="2025-02-13T19:54:22.068398727Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully" Feb 13 19:54:22.068534 containerd[1506]: time="2025-02-13T19:54:22.068515969Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully" Feb 13 19:54:22.069136 containerd[1506]: time="2025-02-13T19:54:22.069099495Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:54:22.069262 containerd[1506]: time="2025-02-13T19:54:22.069205377Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully" Feb 13 19:54:22.069262 containerd[1506]: time="2025-02-13T19:54:22.069217297Z" level=info msg="StopPodSandbox for 
\"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully" Feb 13 19:54:22.069759 systemd[1]: run-netns-cni\x2d4a0090c5\x2d7a59\x2d855c\x2db8f2\x2ddc9f0f178302.mount: Deactivated successfully. Feb 13 19:54:22.071483 containerd[1506]: time="2025-02-13T19:54:22.071450643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:2,}" Feb 13 19:54:22.155470 containerd[1506]: time="2025-02-13T19:54:22.155340176Z" level=error msg="Failed to destroy network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:22.157540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb-shm.mount: Deactivated successfully. 
Feb 13 19:54:22.159378 containerd[1506]: time="2025-02-13T19:54:22.159290542Z" level=error msg="encountered an error cleaning up failed sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:22.159506 containerd[1506]: time="2025-02-13T19:54:22.159380663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:22.159648 kubelet[2013]: E0213 19:54:22.159614 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:22.159725 kubelet[2013]: E0213 19:54:22.159683 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:22.159725 kubelet[2013]: E0213 19:54:22.159706 2013 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:22.159796 kubelet[2013]: E0213 19:54:22.159744 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:22.934726 kubelet[2013]: E0213 19:54:22.934645 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:23.070870 kubelet[2013]: I0213 19:54:23.070799 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb" Feb 13 19:54:23.071740 containerd[1506]: time="2025-02-13T19:54:23.071703353Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\"" Feb 13 19:54:23.072293 containerd[1506]: time="2025-02-13T19:54:23.072123478Z" level=info msg="Ensure that sandbox 3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb in task-service has 
been cleanup successfully" Feb 13 19:54:23.072529 containerd[1506]: time="2025-02-13T19:54:23.072426921Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully" Feb 13 19:54:23.072529 containerd[1506]: time="2025-02-13T19:54:23.072450842Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully" Feb 13 19:54:23.073909 systemd[1]: run-netns-cni\x2da40c472b\x2d7db6\x2d4ad1\x2d2598\x2d4d92abf53d66.mount: Deactivated successfully. Feb 13 19:54:23.075946 containerd[1506]: time="2025-02-13T19:54:23.075916401Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" Feb 13 19:54:23.076134 containerd[1506]: time="2025-02-13T19:54:23.076116683Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully" Feb 13 19:54:23.076193 containerd[1506]: time="2025-02-13T19:54:23.076181124Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully" Feb 13 19:54:23.076631 containerd[1506]: time="2025-02-13T19:54:23.076605329Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:54:23.076757 containerd[1506]: time="2025-02-13T19:54:23.076740691Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully" Feb 13 19:54:23.076757 containerd[1506]: time="2025-02-13T19:54:23.076755611Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully" Feb 13 19:54:23.077639 containerd[1506]: time="2025-02-13T19:54:23.077320737Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:3,}" Feb 13 19:54:23.156506 containerd[1506]: time="2025-02-13T19:54:23.156145314Z" level=error msg="Failed to destroy network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:23.157284 containerd[1506]: time="2025-02-13T19:54:23.156757881Z" level=error msg="encountered an error cleaning up failed sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:23.157284 containerd[1506]: time="2025-02-13T19:54:23.156856683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:23.158193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5-shm.mount: Deactivated successfully. 
Feb 13 19:54:23.161063 kubelet[2013]: E0213 19:54:23.160994 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:23.161582 kubelet[2013]: E0213 19:54:23.161087 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:23.161582 kubelet[2013]: E0213 19:54:23.161112 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:23.161582 kubelet[2013]: E0213 19:54:23.161155 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:23.935621 kubelet[2013]: E0213 19:54:23.935581 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:24.077073 kubelet[2013]: I0213 19:54:24.076412 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5" Feb 13 19:54:24.077262 containerd[1506]: time="2025-02-13T19:54:24.077156904Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\"" Feb 13 19:54:24.077422 containerd[1506]: time="2025-02-13T19:54:24.077359346Z" level=info msg="Ensure that sandbox 006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5 in task-service has been cleanup successfully" Feb 13 19:54:24.078964 systemd[1]: run-netns-cni\x2d2b21805f\x2d8ceb\x2d3ebd\x2d2561\x2dabb44cadc07b.mount: Deactivated successfully. 
Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.079203247Z" level=info msg="TearDown network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" successfully" Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.079230247Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" returns successfully" Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.079880454Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\"" Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.080000416Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully" Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.080011576Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully" Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.080368660Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.080454301Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully" Feb 13 19:54:24.080478 containerd[1506]: time="2025-02-13T19:54:24.080465901Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully" Feb 13 19:54:24.081373 containerd[1506]: time="2025-02-13T19:54:24.081340511Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:54:24.081482 containerd[1506]: time="2025-02-13T19:54:24.081442752Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully" Feb 
13 19:54:24.081482 containerd[1506]: time="2025-02-13T19:54:24.081453432Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully" Feb 13 19:54:24.082218 containerd[1506]: time="2025-02-13T19:54:24.082187920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:4,}" Feb 13 19:54:24.176257 containerd[1506]: time="2025-02-13T19:54:24.175711525Z" level=error msg="Failed to destroy network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:24.176257 containerd[1506]: time="2025-02-13T19:54:24.176117729Z" level=error msg="encountered an error cleaning up failed sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:24.176257 containerd[1506]: time="2025-02-13T19:54:24.176189770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:24.177792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb-shm.mount: Deactivated 
successfully. Feb 13 19:54:24.179583 kubelet[2013]: E0213 19:54:24.178651 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:54:24.179583 kubelet[2013]: E0213 19:54:24.178735 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:24.179583 kubelet[2013]: E0213 19:54:24.178771 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk" Feb 13 19:54:24.179902 kubelet[2013]: E0213 19:54:24.178810 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae" Feb 13 19:54:24.936754 kubelet[2013]: E0213 19:54:24.936702 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:25.082694 kubelet[2013]: I0213 19:54:25.082381 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb" Feb 13 19:54:25.083370 containerd[1506]: time="2025-02-13T19:54:25.083150886Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\"" Feb 13 19:54:25.083576 containerd[1506]: time="2025-02-13T19:54:25.083469569Z" level=info msg="Ensure that sandbox 278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb in task-service has been cleanup successfully" Feb 13 19:54:25.085543 containerd[1506]: time="2025-02-13T19:54:25.085500991Z" level=info msg="TearDown network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" successfully" Feb 13 19:54:25.085543 containerd[1506]: time="2025-02-13T19:54:25.085535752Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" returns successfully" Feb 13 19:54:25.086020 systemd[1]: run-netns-cni\x2dd1c4d1cd\x2d514c\x2db199\x2d82cd\x2ddabb26e26eb7.mount: Deactivated successfully. 
Feb 13 19:54:25.087905 containerd[1506]: time="2025-02-13T19:54:25.087503733Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\""
Feb 13 19:54:25.088037 containerd[1506]: time="2025-02-13T19:54:25.087943338Z" level=info msg="TearDown network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" successfully"
Feb 13 19:54:25.088037 containerd[1506]: time="2025-02-13T19:54:25.087967379Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" returns successfully"
Feb 13 19:54:25.088418 containerd[1506]: time="2025-02-13T19:54:25.088391703Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\""
Feb 13 19:54:25.088632 containerd[1506]: time="2025-02-13T19:54:25.088611986Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully"
Feb 13 19:54:25.088776 containerd[1506]: time="2025-02-13T19:54:25.088756947Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully"
Feb 13 19:54:25.089368 containerd[1506]: time="2025-02-13T19:54:25.089338874Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\""
Feb 13 19:54:25.089609 containerd[1506]: time="2025-02-13T19:54:25.089539516Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully"
Feb 13 19:54:25.089609 containerd[1506]: time="2025-02-13T19:54:25.089557356Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully"
Feb 13 19:54:25.090717 containerd[1506]: time="2025-02-13T19:54:25.090533127Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\""
Feb 13 19:54:25.090717 containerd[1506]: time="2025-02-13T19:54:25.090629528Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully"
Feb 13 19:54:25.090717 containerd[1506]: time="2025-02-13T19:54:25.090639888Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully"
Feb 13 19:54:25.091747 containerd[1506]: time="2025-02-13T19:54:25.091713140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:5,}"
Feb 13 19:54:25.200414 containerd[1506]: time="2025-02-13T19:54:25.200224770Z" level=error msg="Failed to destroy network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:25.203219 containerd[1506]: time="2025-02-13T19:54:25.202143271Z" level=error msg="encountered an error cleaning up failed sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:25.203219 containerd[1506]: time="2025-02-13T19:54:25.202898759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:25.204662 kubelet[2013]: E0213 19:54:25.203173 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:25.204662 kubelet[2013]: E0213 19:54:25.203227 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk"
Feb 13 19:54:25.204662 kubelet[2013]: E0213 19:54:25.203245 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk"
Feb 13 19:54:25.204403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b-shm.mount: Deactivated successfully.
Feb 13 19:54:25.205017 kubelet[2013]: E0213 19:54:25.203290 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae"
Feb 13 19:54:25.920275 kubelet[2013]: I0213 19:54:25.920233 2013 topology_manager.go:215] "Topology Admit Handler" podUID="8a1817b9-44ea-4f31-9227-c8e24dd0bfff" podNamespace="default" podName="nginx-deployment-85f456d6dd-sd4vl"
Feb 13 19:54:25.928729 systemd[1]: Created slice kubepods-besteffort-pod8a1817b9_44ea_4f31_9227_c8e24dd0bfff.slice - libcontainer container kubepods-besteffort-pod8a1817b9_44ea_4f31_9227_c8e24dd0bfff.slice.
Feb 13 19:54:25.937193 kubelet[2013]: E0213 19:54:25.937151 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:54:25.979360 kubelet[2013]: I0213 19:54:25.979002 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdft\" (UniqueName: \"kubernetes.io/projected/8a1817b9-44ea-4f31-9227-c8e24dd0bfff-kube-api-access-whdft\") pod \"nginx-deployment-85f456d6dd-sd4vl\" (UID: \"8a1817b9-44ea-4f31-9227-c8e24dd0bfff\") " pod="default/nginx-deployment-85f456d6dd-sd4vl"
Feb 13 19:54:26.088703 kubelet[2013]: I0213 19:54:26.088284 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b"
Feb 13 19:54:26.089552 containerd[1506]: time="2025-02-13T19:54:26.089201542Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\""
Feb 13 19:54:26.089552 containerd[1506]: time="2025-02-13T19:54:26.089413304Z" level=info msg="Ensure that sandbox d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b in task-service has been cleanup successfully"
Feb 13 19:54:26.091075 systemd[1]: run-netns-cni\x2ddf65f7ce\x2dc823\x2dad03\x2de586\x2d3a6bb3984107.mount: Deactivated successfully.
Feb 13 19:54:26.092795 containerd[1506]: time="2025-02-13T19:54:26.092257175Z" level=info msg="TearDown network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" successfully"
Feb 13 19:54:26.092795 containerd[1506]: time="2025-02-13T19:54:26.092288255Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" returns successfully"
Feb 13 19:54:26.093604 containerd[1506]: time="2025-02-13T19:54:26.093053423Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\""
Feb 13 19:54:26.093604 containerd[1506]: time="2025-02-13T19:54:26.093148384Z" level=info msg="TearDown network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" successfully"
Feb 13 19:54:26.093604 containerd[1506]: time="2025-02-13T19:54:26.093157864Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" returns successfully"
Feb 13 19:54:26.094305 containerd[1506]: time="2025-02-13T19:54:26.094146315Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\""
Feb 13 19:54:26.094305 containerd[1506]: time="2025-02-13T19:54:26.094240316Z" level=info msg="TearDown network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" successfully"
Feb 13 19:54:26.094305 containerd[1506]: time="2025-02-13T19:54:26.094249636Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" returns successfully"
Feb 13 19:54:26.098462 containerd[1506]: time="2025-02-13T19:54:26.097987956Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\""
Feb 13 19:54:26.098462 containerd[1506]: time="2025-02-13T19:54:26.098090757Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully"
Feb 13 19:54:26.098462 containerd[1506]: time="2025-02-13T19:54:26.098100637Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully"
Feb 13 19:54:26.101143 containerd[1506]: time="2025-02-13T19:54:26.101113510Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\""
Feb 13 19:54:26.101645 containerd[1506]: time="2025-02-13T19:54:26.101568395Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully"
Feb 13 19:54:26.101645 containerd[1506]: time="2025-02-13T19:54:26.101587635Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully"
Feb 13 19:54:26.103064 containerd[1506]: time="2025-02-13T19:54:26.102495565Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\""
Feb 13 19:54:26.103064 containerd[1506]: time="2025-02-13T19:54:26.102982730Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully"
Feb 13 19:54:26.103064 containerd[1506]: time="2025-02-13T19:54:26.102999290Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully"
Feb 13 19:54:26.103797 containerd[1506]: time="2025-02-13T19:54:26.103652857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:6,}"
Feb 13 19:54:26.201230 containerd[1506]: time="2025-02-13T19:54:26.201090626Z" level=error msg="Failed to destroy network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.203631 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2-shm.mount: Deactivated successfully.
Feb 13 19:54:26.204566 containerd[1506]: time="2025-02-13T19:54:26.201543631Z" level=error msg="encountered an error cleaning up failed sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.205073 containerd[1506]: time="2025-02-13T19:54:26.204811707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.205303 kubelet[2013]: E0213 19:54:26.205270 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.205772 kubelet[2013]: E0213 19:54:26.205624 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk"
Feb 13 19:54:26.205772 kubelet[2013]: E0213 19:54:26.205651 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk"
Feb 13 19:54:26.205772 kubelet[2013]: E0213 19:54:26.205734 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae"
Feb 13 19:54:26.233442 containerd[1506]: time="2025-02-13T19:54:26.233070131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sd4vl,Uid:8a1817b9-44ea-4f31-9227-c8e24dd0bfff,Namespace:default,Attempt:0,}"
Feb 13 19:54:26.349089 containerd[1506]: time="2025-02-13T19:54:26.349019339Z" level=error msg="Failed to destroy network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.350866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d-shm.mount: Deactivated successfully.
Feb 13 19:54:26.352224 containerd[1506]: time="2025-02-13T19:54:26.352175773Z" level=error msg="encountered an error cleaning up failed sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.352444 containerd[1506]: time="2025-02-13T19:54:26.352412856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sd4vl,Uid:8a1817b9-44ea-4f31-9227-c8e24dd0bfff,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.352678 kubelet[2013]: E0213 19:54:26.352637 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:26.352751 kubelet[2013]: E0213 19:54:26.352704 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sd4vl"
Feb 13 19:54:26.352751 kubelet[2013]: E0213 19:54:26.352727 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sd4vl"
Feb 13 19:54:26.353047 kubelet[2013]: E0213 19:54:26.352771 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sd4vl_default(8a1817b9-44ea-4f31-9227-c8e24dd0bfff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sd4vl_default(8a1817b9-44ea-4f31-9227-c8e24dd0bfff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sd4vl" podUID="8a1817b9-44ea-4f31-9227-c8e24dd0bfff"
Feb 13 19:54:26.938181 kubelet[2013]: E0213 19:54:26.938139 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:54:27.097806 kubelet[2013]: I0213 19:54:27.097344 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2"
Feb 13 19:54:27.098722 containerd[1506]: time="2025-02-13T19:54:27.098642793Z" level=info msg="StopPodSandbox for \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\""
Feb 13 19:54:27.099456 containerd[1506]: time="2025-02-13T19:54:27.099157959Z" level=info msg="Ensure that sandbox b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2 in task-service has been cleanup successfully"
Feb 13 19:54:27.099742 containerd[1506]: time="2025-02-13T19:54:27.099695924Z" level=info msg="TearDown network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" successfully"
Feb 13 19:54:27.101692 containerd[1506]: time="2025-02-13T19:54:27.100103209Z" level=info msg="StopPodSandbox for \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" returns successfully"
Feb 13 19:54:27.101692 containerd[1506]: time="2025-02-13T19:54:27.100494173Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\""
Feb 13 19:54:27.101692 containerd[1506]: time="2025-02-13T19:54:27.100627494Z" level=info msg="TearDown network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" successfully"
Feb 13 19:54:27.101692 containerd[1506]: time="2025-02-13T19:54:27.100641614Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" returns successfully"
Feb 13 19:54:27.103297 containerd[1506]: time="2025-02-13T19:54:27.103151401Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\""
Feb 13 19:54:27.103297 containerd[1506]: time="2025-02-13T19:54:27.103239082Z" level=info msg="TearDown network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" successfully"
Feb 13 19:54:27.103297 containerd[1506]: time="2025-02-13T19:54:27.103250722Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" returns successfully"
Feb 13 19:54:27.103418 systemd[1]: run-netns-cni\x2d0f11b144\x2d5232\x2df5f0\x2dfb94\x2d96e33fc47755.mount: Deactivated successfully.
Feb 13 19:54:27.104248 kubelet[2013]: I0213 19:54:27.103580 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d"
Feb 13 19:54:27.104298 containerd[1506]: time="2025-02-13T19:54:27.104182772Z" level=info msg="StopPodSandbox for \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\""
Feb 13 19:54:27.104581 containerd[1506]: time="2025-02-13T19:54:27.104323773Z" level=info msg="Ensure that sandbox aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d in task-service has been cleanup successfully"
Feb 13 19:54:27.104581 containerd[1506]: time="2025-02-13T19:54:27.104451175Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\""
Feb 13 19:54:27.104581 containerd[1506]: time="2025-02-13T19:54:27.104525015Z" level=info msg="TearDown network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" successfully"
Feb 13 19:54:27.104581 containerd[1506]: time="2025-02-13T19:54:27.104535015Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" returns successfully"
Feb 13 19:54:27.105772 containerd[1506]: time="2025-02-13T19:54:27.105092141Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\""
Feb 13 19:54:27.105772 containerd[1506]: time="2025-02-13T19:54:27.105181262Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully"
Feb 13 19:54:27.105772 containerd[1506]: time="2025-02-13T19:54:27.105191902Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully"
Feb 13 19:54:27.107107 systemd[1]: run-netns-cni\x2d183138e0\x2d9d2f\x2d5ff7\x2d5745\x2d6271671eed79.mount: Deactivated successfully.
Feb 13 19:54:27.107509 containerd[1506]: time="2025-02-13T19:54:27.106351395Z" level=info msg="TearDown network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" successfully"
Feb 13 19:54:27.107509 containerd[1506]: time="2025-02-13T19:54:27.107486687Z" level=info msg="StopPodSandbox for \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" returns successfully"
Feb 13 19:54:27.108777 containerd[1506]: time="2025-02-13T19:54:27.108741500Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\""
Feb 13 19:54:27.108865 containerd[1506]: time="2025-02-13T19:54:27.108830701Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully"
Feb 13 19:54:27.108895 containerd[1506]: time="2025-02-13T19:54:27.108840021Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully"
Feb 13 19:54:27.109168 containerd[1506]: time="2025-02-13T19:54:27.108988703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sd4vl,Uid:8a1817b9-44ea-4f31-9227-c8e24dd0bfff,Namespace:default,Attempt:1,}"
Feb 13 19:54:27.110331 containerd[1506]: time="2025-02-13T19:54:27.109716830Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\""
Feb 13 19:54:27.110723 containerd[1506]: time="2025-02-13T19:54:27.110501239Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully"
Feb 13 19:54:27.110723 containerd[1506]: time="2025-02-13T19:54:27.110521519Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully"
Feb 13 19:54:27.111885 containerd[1506]: time="2025-02-13T19:54:27.111683931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:7,}"
Feb 13 19:54:27.242191 containerd[1506]: time="2025-02-13T19:54:27.241563785Z" level=error msg="Failed to destroy network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.245731 containerd[1506]: time="2025-02-13T19:54:27.244966541Z" level=error msg="encountered an error cleaning up failed sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.245731 containerd[1506]: time="2025-02-13T19:54:27.245041582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sd4vl,Uid:8a1817b9-44ea-4f31-9227-c8e24dd0bfff,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.247161 kubelet[2013]: E0213 19:54:27.245246 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.247161 kubelet[2013]: E0213 19:54:27.245302 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sd4vl"
Feb 13 19:54:27.247161 kubelet[2013]: E0213 19:54:27.245329 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-sd4vl"
Feb 13 19:54:27.247289 kubelet[2013]: E0213 19:54:27.245383 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-sd4vl_default(8a1817b9-44ea-4f31-9227-c8e24dd0bfff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-sd4vl_default(8a1817b9-44ea-4f31-9227-c8e24dd0bfff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-sd4vl" podUID="8a1817b9-44ea-4f31-9227-c8e24dd0bfff"
Feb 13 19:54:27.247611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6-shm.mount: Deactivated successfully.
Feb 13 19:54:27.248833 containerd[1506]: time="2025-02-13T19:54:27.248404017Z" level=error msg="Failed to destroy network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.250746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad-shm.mount: Deactivated successfully.
Feb 13 19:54:27.251823 containerd[1506]: time="2025-02-13T19:54:27.251774373Z" level=error msg="encountered an error cleaning up failed sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.252771 containerd[1506]: time="2025-02-13T19:54:27.252246058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.252909 kubelet[2013]: E0213 19:54:27.252509 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:54:27.252909 kubelet[2013]: E0213 19:54:27.252568 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk"
Feb 13 19:54:27.252909 kubelet[2013]: E0213 19:54:27.252590 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpmmk"
Feb 13 19:54:27.253012 kubelet[2013]: E0213 19:54:27.252633 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dpmmk_calico-system(e434ee7b-e67f-4fda-aedd-10f6c7172fae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpmmk" podUID="e434ee7b-e67f-4fda-aedd-10f6c7172fae"
Feb 13 19:54:27.605907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906844722.mount: Deactivated successfully.
Feb 13 19:54:27.647744 containerd[1506]: time="2025-02-13T19:54:27.646788391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Feb 13 19:54:27.648582 containerd[1506]: time="2025-02-13T19:54:27.648548249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:54:27.650180 containerd[1506]: time="2025-02-13T19:54:27.650146386Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:54:27.650975 containerd[1506]: time="2025-02-13T19:54:27.650941834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:54:27.651735 containerd[1506]: time="2025-02-13T19:54:27.651710963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.594888394s"
Feb 13 19:54:27.651885 containerd[1506]: time="2025-02-13T19:54:27.651828724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Feb 13 19:54:27.662761 containerd[1506]: time="2025-02-13T19:54:27.662635878Z" level=info msg="CreateContainer within sandbox \"71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 19:54:27.676208 containerd[1506]: time="2025-02-13T19:54:27.676154421Z" level=info msg="CreateContainer within sandbox \"71e4cb92f6d84d0042eb6b3d889cecfd18e45a448b3bad1acfc1ce47962f624a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bd11ffd3f8a2ab0111d2a95d1aecc0d3d00ca2851df690e4bee929a1edb6e66\""
Feb 13 19:54:27.676824 containerd[1506]: time="2025-02-13T19:54:27.676757067Z" level=info msg="StartContainer for \"7bd11ffd3f8a2ab0111d2a95d1aecc0d3d00ca2851df690e4bee929a1edb6e66\""
Feb 13 19:54:27.704890 systemd[1]: Started cri-containerd-7bd11ffd3f8a2ab0111d2a95d1aecc0d3d00ca2851df690e4bee929a1edb6e66.scope - libcontainer container 7bd11ffd3f8a2ab0111d2a95d1aecc0d3d00ca2851df690e4bee929a1edb6e66.
Feb 13 19:54:27.737169 containerd[1506]: time="2025-02-13T19:54:27.737078105Z" level=info msg="StartContainer for \"7bd11ffd3f8a2ab0111d2a95d1aecc0d3d00ca2851df690e4bee929a1edb6e66\" returns successfully"
Feb 13 19:54:27.842430 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 19:54:27.842566 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 13 19:54:27.925419 kubelet[2013]: E0213 19:54:27.925354 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:27.938920 kubelet[2013]: E0213 19:54:27.938844 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:28.121495 kubelet[2013]: I0213 19:54:28.121425 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad" Feb 13 19:54:28.123031 containerd[1506]: time="2025-02-13T19:54:28.122652321Z" level=info msg="StopPodSandbox for \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\"" Feb 13 19:54:28.123031 containerd[1506]: time="2025-02-13T19:54:28.122883523Z" level=info msg="Ensure that sandbox d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad in task-service has been cleanup successfully" Feb 13 19:54:28.124010 containerd[1506]: time="2025-02-13T19:54:28.123752252Z" level=info msg="TearDown network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\" successfully" Feb 13 19:54:28.124201 containerd[1506]: time="2025-02-13T19:54:28.124089416Z" level=info msg="StopPodSandbox for \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\" returns successfully" Feb 13 19:54:28.126519 containerd[1506]: time="2025-02-13T19:54:28.126465481Z" level=info msg="StopPodSandbox for \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\"" Feb 13 19:54:28.126620 kubelet[2013]: I0213 19:54:28.126546 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6" Feb 13 19:54:28.128474 containerd[1506]: time="2025-02-13T19:54:28.127229609Z" level=info msg="StopPodSandbox for \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\"" Feb 13 
19:54:28.128474 containerd[1506]: time="2025-02-13T19:54:28.127464811Z" level=info msg="Ensure that sandbox ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6 in task-service has been cleanup successfully" Feb 13 19:54:28.128474 containerd[1506]: time="2025-02-13T19:54:28.127690453Z" level=info msg="TearDown network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" successfully" Feb 13 19:54:28.128474 containerd[1506]: time="2025-02-13T19:54:28.127719414Z" level=info msg="StopPodSandbox for \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" returns successfully" Feb 13 19:54:28.128474 containerd[1506]: time="2025-02-13T19:54:28.128339820Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\"" Feb 13 19:54:28.128650 containerd[1506]: time="2025-02-13T19:54:28.128491982Z" level=info msg="TearDown network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" successfully" Feb 13 19:54:28.128650 containerd[1506]: time="2025-02-13T19:54:28.128506262Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" returns successfully" Feb 13 19:54:28.128650 containerd[1506]: time="2025-02-13T19:54:28.128582783Z" level=info msg="TearDown network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\" successfully" Feb 13 19:54:28.128650 containerd[1506]: time="2025-02-13T19:54:28.128607103Z" level=info msg="StopPodSandbox for \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\" returns successfully" Feb 13 19:54:28.129197 containerd[1506]: time="2025-02-13T19:54:28.129064388Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\"" Feb 13 19:54:28.129267 containerd[1506]: time="2025-02-13T19:54:28.129256230Z" level=info msg="TearDown network for sandbox 
\"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" successfully" Feb 13 19:54:28.129298 containerd[1506]: time="2025-02-13T19:54:28.129267710Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" returns successfully" Feb 13 19:54:28.129411 containerd[1506]: time="2025-02-13T19:54:28.129340510Z" level=info msg="StopPodSandbox for \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\"" Feb 13 19:54:28.130557 containerd[1506]: time="2025-02-13T19:54:28.129751235Z" level=info msg="TearDown network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" successfully" Feb 13 19:54:28.130557 containerd[1506]: time="2025-02-13T19:54:28.129778955Z" level=info msg="StopPodSandbox for \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" returns successfully" Feb 13 19:54:28.130813 kubelet[2013]: I0213 19:54:28.130758 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8h6m8" podStartSLOduration=3.616605007 podStartE2EDuration="20.130729325s" podCreationTimestamp="2025-02-13 19:54:08 +0000 UTC" firstStartedPulling="2025-02-13 19:54:11.138834298 +0000 UTC m=+3.598672723" lastFinishedPulling="2025-02-13 19:54:27.652958616 +0000 UTC m=+20.112797041" observedRunningTime="2025-02-13 19:54:28.130175479 +0000 UTC m=+20.590013904" watchObservedRunningTime="2025-02-13 19:54:28.130729325 +0000 UTC m=+20.590567750" Feb 13 19:54:28.131355 containerd[1506]: time="2025-02-13T19:54:28.131158609Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\"" Feb 13 19:54:28.131355 containerd[1506]: time="2025-02-13T19:54:28.131246570Z" level=info msg="TearDown network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" successfully" Feb 13 19:54:28.131355 containerd[1506]: time="2025-02-13T19:54:28.131256290Z" level=info msg="StopPodSandbox for 
\"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" returns successfully" Feb 13 19:54:28.132055 containerd[1506]: time="2025-02-13T19:54:28.132030738Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\"" Feb 13 19:54:28.132214 containerd[1506]: time="2025-02-13T19:54:28.132069019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sd4vl,Uid:8a1817b9-44ea-4f31-9227-c8e24dd0bfff,Namespace:default,Attempt:2,}" Feb 13 19:54:28.132434 containerd[1506]: time="2025-02-13T19:54:28.132299741Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully" Feb 13 19:54:28.132434 containerd[1506]: time="2025-02-13T19:54:28.132317861Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully" Feb 13 19:54:28.132865 containerd[1506]: time="2025-02-13T19:54:28.132708705Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" Feb 13 19:54:28.132865 containerd[1506]: time="2025-02-13T19:54:28.132792746Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully" Feb 13 19:54:28.132865 containerd[1506]: time="2025-02-13T19:54:28.132802306Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully" Feb 13 19:54:28.133444 containerd[1506]: time="2025-02-13T19:54:28.133222951Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:54:28.133444 containerd[1506]: time="2025-02-13T19:54:28.133294512Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully" Feb 13 19:54:28.133444 containerd[1506]: time="2025-02-13T19:54:28.133303032Z" 
level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully" Feb 13 19:54:28.134245 containerd[1506]: time="2025-02-13T19:54:28.133981359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:8,}" Feb 13 19:54:28.214333 systemd[1]: run-netns-cni\x2d8561262c\x2dbccc\x2dd7dc\x2d1dae\x2daef8249486d6.mount: Deactivated successfully. Feb 13 19:54:28.214438 systemd[1]: run-netns-cni\x2de47b8ebe\x2d1a01\x2d1591\x2d9b07\x2dc51527540076.mount: Deactivated successfully. Feb 13 19:54:28.367724 systemd-networkd[1380]: cali6d32f26cf3d: Link UP Feb 13 19:54:28.368434 systemd-networkd[1380]: cali6d32f26cf3d: Gained carrier Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.208 [INFO][2840] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.240 [INFO][2840] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0 nginx-deployment-85f456d6dd- default 8a1817b9-44ea-4f31-9227-c8e24dd0bfff 1724 0 2025-02-13 19:54:25 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-85f456d6dd-sd4vl eth0 default [] [] [kns.default ksa.default.default] cali6d32f26cf3d [] []}} ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.240 [INFO][2840] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" 
Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.294 [INFO][2871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" HandleID="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Workload="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.313 [INFO][2871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" HandleID="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Workload="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000300570), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-85f456d6dd-sd4vl", "timestamp":"2025-02-13 19:54:28.294172703 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.313 [INFO][2871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.313 [INFO][2871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.313 [INFO][2871] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.315 [INFO][2871] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.322 [INFO][2871] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.329 [INFO][2871] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.332 [INFO][2871] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.336 [INFO][2871] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.336 [INFO][2871] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.339 [INFO][2871] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.346 [INFO][2871] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.355 [INFO][2871] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 
handle="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.355 [INFO][2871] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" host="10.0.0.4" Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.355 [INFO][2871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:28.385580 containerd[1506]: 2025-02-13 19:54:28.355 [INFO][2871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" HandleID="k8s-pod-network.e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Workload="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" Feb 13 19:54:28.386740 containerd[1506]: 2025-02-13 19:54:28.358 [INFO][2840] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a1817b9-44ea-4f31-9227-c8e24dd0bfff", ResourceVersion:"1724", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-sd4vl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6d32f26cf3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:28.386740 containerd[1506]: 2025-02-13 19:54:28.359 [INFO][2840] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" Feb 13 19:54:28.386740 containerd[1506]: 2025-02-13 19:54:28.359 [INFO][2840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d32f26cf3d ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" Feb 13 19:54:28.386740 containerd[1506]: 2025-02-13 19:54:28.368 [INFO][2840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" Feb 13 19:54:28.386740 containerd[1506]: 2025-02-13 19:54:28.371 [INFO][2840] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" 
WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a1817b9-44ea-4f31-9227-c8e24dd0bfff", ResourceVersion:"1724", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a", Pod:"nginx-deployment-85f456d6dd-sd4vl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6d32f26cf3d", MAC:"56:a7:87:48:3b:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:28.386740 containerd[1506]: 2025-02-13 19:54:28.382 [INFO][2840] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a" Namespace="default" Pod="nginx-deployment-85f456d6dd-sd4vl" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--85f456d6dd--sd4vl-eth0" Feb 13 19:54:28.412183 containerd[1506]: time="2025-02-13T19:54:28.409196578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:28.412183 containerd[1506]: time="2025-02-13T19:54:28.410125628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:28.412183 containerd[1506]: time="2025-02-13T19:54:28.410139068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:28.412183 containerd[1506]: time="2025-02-13T19:54:28.410225789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:28.419836 systemd-networkd[1380]: calibf952012d5a: Link UP Feb 13 19:54:28.420059 systemd-networkd[1380]: calibf952012d5a: Gained carrier Feb 13 19:54:28.433607 systemd[1]: run-containerd-runc-k8s.io-e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a-runc.VHYpQH.mount: Deactivated successfully. 
Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.230 [INFO][2851] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.250 [INFO][2851] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--dpmmk-eth0 csi-node-driver- calico-system e434ee7b-e67f-4fda-aedd-10f6c7172fae 1641 0 2025-02-13 19:54:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-dpmmk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibf952012d5a [] []}} ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.250 [INFO][2851] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.294 [INFO][2875] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" HandleID="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Workload="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.318 [INFO][2875] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" HandleID="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Workload="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004def0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-dpmmk", "timestamp":"2025-02-13 19:54:28.294176223 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.318 [INFO][2875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.355 [INFO][2875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.355 [INFO][2875] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.364 [INFO][2875] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.375 [INFO][2875] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.384 [INFO][2875] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.387 [INFO][2875] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.391 [INFO][2875] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 
13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.391 [INFO][2875] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.393 [INFO][2875] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.400 [INFO][2875] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.409 [INFO][2875] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.409 [INFO][2875] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" host="10.0.0.4" Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.409 [INFO][2875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:54:28.439264 containerd[1506]: 2025-02-13 19:54:28.409 [INFO][2875] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" HandleID="k8s-pod-network.b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Workload="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" Feb 13 19:54:28.440004 containerd[1506]: 2025-02-13 19:54:28.413 [INFO][2851] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--dpmmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e434ee7b-e67f-4fda-aedd-10f6c7172fae", ResourceVersion:"1641", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-dpmmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf952012d5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:28.440004 containerd[1506]: 2025-02-13 19:54:28.414 [INFO][2851] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" Feb 13 19:54:28.440004 containerd[1506]: 2025-02-13 19:54:28.414 [INFO][2851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf952012d5a ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" Feb 13 19:54:28.440004 containerd[1506]: 2025-02-13 19:54:28.419 [INFO][2851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" Feb 13 19:54:28.440004 containerd[1506]: 2025-02-13 19:54:28.419 [INFO][2851] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--dpmmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e434ee7b-e67f-4fda-aedd-10f6c7172fae", ResourceVersion:"1641", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 54, 8, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf", Pod:"csi-node-driver-dpmmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf952012d5a", MAC:"f6:08:78:75:3c:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:28.440004 containerd[1506]: 2025-02-13 19:54:28.436 [INFO][2851] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf" Namespace="calico-system" Pod="csi-node-driver-dpmmk" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dpmmk-eth0" Feb 13 19:54:28.445915 systemd[1]: Started cri-containerd-e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a.scope - libcontainer container e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a. Feb 13 19:54:28.468620 containerd[1506]: time="2025-02-13T19:54:28.468199351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:28.468620 containerd[1506]: time="2025-02-13T19:54:28.468269592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:28.468620 containerd[1506]: time="2025-02-13T19:54:28.468289872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:28.468620 containerd[1506]: time="2025-02-13T19:54:28.468366473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:28.495948 systemd[1]: Started cri-containerd-b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf.scope - libcontainer container b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf. Feb 13 19:54:28.497833 containerd[1506]: time="2025-02-13T19:54:28.497708458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-sd4vl,Uid:8a1817b9-44ea-4f31-9227-c8e24dd0bfff,Namespace:default,Attempt:2,} returns sandbox id \"e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a\"" Feb 13 19:54:28.501016 containerd[1506]: time="2025-02-13T19:54:28.500987292Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:54:28.525570 containerd[1506]: time="2025-02-13T19:54:28.525531947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpmmk,Uid:e434ee7b-e67f-4fda-aedd-10f6c7172fae,Namespace:calico-system,Attempt:8,} returns sandbox id \"b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf\"" Feb 13 19:54:28.940025 kubelet[2013]: E0213 19:54:28.939931 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:29.447729 kernel: bpftool[3128]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:54:29.628662 systemd-networkd[1380]: vxlan.calico: Link UP Feb 13 19:54:29.628680 systemd-networkd[1380]: vxlan.calico: Gained carrier Feb 13 19:54:29.760986 systemd-networkd[1380]: calibf952012d5a: Gained IPv6LL 
Feb 13 19:54:29.940322 kubelet[2013]: E0213 19:54:29.940260 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:30.336912 systemd-networkd[1380]: cali6d32f26cf3d: Gained IPv6LL Feb 13 19:54:30.655891 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Feb 13 19:54:30.941443 kubelet[2013]: E0213 19:54:30.941271 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:31.600269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621496829.mount: Deactivated successfully. Feb 13 19:54:31.942350 kubelet[2013]: E0213 19:54:31.942267 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:32.456771 containerd[1506]: time="2025-02-13T19:54:32.455568209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:32.457583 containerd[1506]: time="2025-02-13T19:54:32.457506548Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:54:32.458551 containerd[1506]: time="2025-02-13T19:54:32.458491277Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:32.463572 containerd[1506]: time="2025-02-13T19:54:32.463527206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:32.465013 containerd[1506]: time="2025-02-13T19:54:32.464961460Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag 
\"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 3.963787966s" Feb 13 19:54:32.465161 containerd[1506]: time="2025-02-13T19:54:32.465142062Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:54:32.467354 containerd[1506]: time="2025-02-13T19:54:32.467323483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:54:32.470095 containerd[1506]: time="2025-02-13T19:54:32.469913788Z" level=info msg="CreateContainer within sandbox \"e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:54:32.487276 containerd[1506]: time="2025-02-13T19:54:32.487157715Z" level=info msg="CreateContainer within sandbox \"e4a8f54ed4301e2dfbb8a52306d398daf9d45ba751456bdec717e566880be37a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ca27eeb9534592fe3fe0a359b7cb796fa2fd51eb17ccce83c03211d91ed080c6\"" Feb 13 19:54:32.488275 containerd[1506]: time="2025-02-13T19:54:32.488149805Z" level=info msg="StartContainer for \"ca27eeb9534592fe3fe0a359b7cb796fa2fd51eb17ccce83c03211d91ed080c6\"" Feb 13 19:54:32.526885 systemd[1]: Started cri-containerd-ca27eeb9534592fe3fe0a359b7cb796fa2fd51eb17ccce83c03211d91ed080c6.scope - libcontainer container ca27eeb9534592fe3fe0a359b7cb796fa2fd51eb17ccce83c03211d91ed080c6. 
Feb 13 19:54:32.558383 containerd[1506]: time="2025-02-13T19:54:32.557760040Z" level=info msg="StartContainer for \"ca27eeb9534592fe3fe0a359b7cb796fa2fd51eb17ccce83c03211d91ed080c6\" returns successfully" Feb 13 19:54:32.942770 kubelet[2013]: E0213 19:54:32.942699 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:33.943930 kubelet[2013]: E0213 19:54:33.943868 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:33.953747 containerd[1506]: time="2025-02-13T19:54:33.953049267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:33.954478 containerd[1506]: time="2025-02-13T19:54:33.954434200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:54:33.955744 containerd[1506]: time="2025-02-13T19:54:33.955719732Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:33.958408 containerd[1506]: time="2025-02-13T19:54:33.958365597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:33.959663 containerd[1506]: time="2025-02-13T19:54:33.959626849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.492090764s" Feb 13 19:54:33.959806 containerd[1506]: 
time="2025-02-13T19:54:33.959785811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:54:33.962385 containerd[1506]: time="2025-02-13T19:54:33.962331835Z" level=info msg="CreateContainer within sandbox \"b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:54:33.985922 containerd[1506]: time="2025-02-13T19:54:33.985861580Z" level=info msg="CreateContainer within sandbox \"b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ba962c79e90d43989b5a927aecbd98367f99eb0bee5cf089ec0e566894f16bdb\"" Feb 13 19:54:33.986725 containerd[1506]: time="2025-02-13T19:54:33.986598147Z" level=info msg="StartContainer for \"ba962c79e90d43989b5a927aecbd98367f99eb0bee5cf089ec0e566894f16bdb\"" Feb 13 19:54:34.018905 systemd[1]: Started cri-containerd-ba962c79e90d43989b5a927aecbd98367f99eb0bee5cf089ec0e566894f16bdb.scope - libcontainer container ba962c79e90d43989b5a927aecbd98367f99eb0bee5cf089ec0e566894f16bdb. 
Feb 13 19:54:34.056035 containerd[1506]: time="2025-02-13T19:54:34.055979681Z" level=info msg="StartContainer for \"ba962c79e90d43989b5a927aecbd98367f99eb0bee5cf089ec0e566894f16bdb\" returns successfully" Feb 13 19:54:34.058069 containerd[1506]: time="2025-02-13T19:54:34.058039620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:54:34.944749 kubelet[2013]: E0213 19:54:34.944645 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:35.513741 containerd[1506]: time="2025-02-13T19:54:35.512861004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:35.515748 containerd[1506]: time="2025-02-13T19:54:35.515229906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:54:35.517339 containerd[1506]: time="2025-02-13T19:54:35.517275644Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:35.521244 containerd[1506]: time="2025-02-13T19:54:35.521170720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:35.522843 containerd[1506]: time="2025-02-13T19:54:35.522618494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.464540434s" Feb 13 19:54:35.522843 containerd[1506]: time="2025-02-13T19:54:35.522712455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:54:35.526711 containerd[1506]: time="2025-02-13T19:54:35.526497850Z" level=info msg="CreateContainer within sandbox \"b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:54:35.543841 containerd[1506]: time="2025-02-13T19:54:35.543781649Z" level=info msg="CreateContainer within sandbox \"b94d976d37892635c11876d56500af01828dd5c626b8620e970db78cb108c1bf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2b67f2a109e451144fb9420cba71cd5a84ba76fda518fe32ee48628e00302726\"" Feb 13 19:54:35.544448 containerd[1506]: time="2025-02-13T19:54:35.544387255Z" level=info msg="StartContainer for \"2b67f2a109e451144fb9420cba71cd5a84ba76fda518fe32ee48628e00302726\"" Feb 13 19:54:35.580996 systemd[1]: Started cri-containerd-2b67f2a109e451144fb9420cba71cd5a84ba76fda518fe32ee48628e00302726.scope - libcontainer container 2b67f2a109e451144fb9420cba71cd5a84ba76fda518fe32ee48628e00302726. 
Feb 13 19:54:35.616341 containerd[1506]: time="2025-02-13T19:54:35.616216439Z" level=info msg="StartContainer for \"2b67f2a109e451144fb9420cba71cd5a84ba76fda518fe32ee48628e00302726\" returns successfully" Feb 13 19:54:35.945036 kubelet[2013]: E0213 19:54:35.944970 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:36.044577 kubelet[2013]: I0213 19:54:36.043815 2013 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:54:36.044577 kubelet[2013]: I0213 19:54:36.044021 2013 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:54:36.190771 kubelet[2013]: I0213 19:54:36.190658 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dpmmk" podStartSLOduration=21.193446414 podStartE2EDuration="28.19062976s" podCreationTimestamp="2025-02-13 19:54:08 +0000 UTC" firstStartedPulling="2025-02-13 19:54:28.526838601 +0000 UTC m=+20.986677026" lastFinishedPulling="2025-02-13 19:54:35.524021947 +0000 UTC m=+27.983860372" observedRunningTime="2025-02-13 19:54:36.190182595 +0000 UTC m=+28.650021060" watchObservedRunningTime="2025-02-13 19:54:36.19062976 +0000 UTC m=+28.650468225" Feb 13 19:54:36.191415 kubelet[2013]: I0213 19:54:36.191342 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-sd4vl" podStartSLOduration=7.224822333 podStartE2EDuration="11.191321126s" podCreationTimestamp="2025-02-13 19:54:25 +0000 UTC" firstStartedPulling="2025-02-13 19:54:28.499996082 +0000 UTC m=+20.959834467" lastFinishedPulling="2025-02-13 19:54:32.466494835 +0000 UTC m=+24.926333260" observedRunningTime="2025-02-13 19:54:33.167618491 +0000 UTC m=+25.627456956" 
watchObservedRunningTime="2025-02-13 19:54:36.191321126 +0000 UTC m=+28.651159591" Feb 13 19:54:36.945825 kubelet[2013]: E0213 19:54:36.945746 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:37.946271 kubelet[2013]: E0213 19:54:37.946209 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:38.947211 kubelet[2013]: E0213 19:54:38.947129 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:39.948107 kubelet[2013]: E0213 19:54:39.948026 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:40.829328 kubelet[2013]: I0213 19:54:40.829268 2013 topology_manager.go:215] "Topology Admit Handler" podUID="d661528d-dfa9-4522-9103-489eff2cd846" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 19:54:40.838605 systemd[1]: Created slice kubepods-besteffort-podd661528d_dfa9_4522_9103_489eff2cd846.slice - libcontainer container kubepods-besteffort-podd661528d_dfa9_4522_9103_489eff2cd846.slice. 
Feb 13 19:54:40.881076 kubelet[2013]: I0213 19:54:40.880959 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d661528d-dfa9-4522-9103-489eff2cd846-data\") pod \"nfs-server-provisioner-0\" (UID: \"d661528d-dfa9-4522-9103-489eff2cd846\") " pod="default/nfs-server-provisioner-0" Feb 13 19:54:40.881076 kubelet[2013]: I0213 19:54:40.881069 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx4w\" (UniqueName: \"kubernetes.io/projected/d661528d-dfa9-4522-9103-489eff2cd846-kube-api-access-dfx4w\") pod \"nfs-server-provisioner-0\" (UID: \"d661528d-dfa9-4522-9103-489eff2cd846\") " pod="default/nfs-server-provisioner-0" Feb 13 19:54:40.948500 kubelet[2013]: E0213 19:54:40.948388 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:41.142555 containerd[1506]: time="2025-02-13T19:54:41.142492032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d661528d-dfa9-4522-9103-489eff2cd846,Namespace:default,Attempt:0,}" Feb 13 19:54:41.307500 systemd-networkd[1380]: cali60e51b789ff: Link UP Feb 13 19:54:41.309618 systemd-networkd[1380]: cali60e51b789ff: Gained carrier Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.200 [INFO][3408] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d661528d-dfa9-4522-9103-489eff2cd846 1819 0 2025-02-13 19:54:40 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.201 [INFO][3408] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.238 [INFO][3415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" HandleID="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.258 [INFO][3415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" HandleID="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028ced0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:54:41.238279081 
+0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.259 [INFO][3415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.259 [INFO][3415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.259 [INFO][3415] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.262 [INFO][3415] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.269 [INFO][3415] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.275 [INFO][3415] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.278 [INFO][3415] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.282 [INFO][3415] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.282 [INFO][3415] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.285 [INFO][3415] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6 Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.291 [INFO][3415] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.299 [INFO][3415] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.300 [INFO][3415] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" host="10.0.0.4" Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.300 [INFO][3415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:54:41.323800 containerd[1506]: 2025-02-13 19:54:41.300 [INFO][3415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" HandleID="k8s-pod-network.5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:54:41.324804 containerd[1506]: 2025-02-13 19:54:41.303 [INFO][3408] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d661528d-dfa9-4522-9103-489eff2cd846", ResourceVersion:"1819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 54, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:41.324804 containerd[1506]: 2025-02-13 19:54:41.303 [INFO][3408] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:54:41.324804 containerd[1506]: 2025-02-13 19:54:41.304 [INFO][3408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:54:41.324804 containerd[1506]: 2025-02-13 19:54:41.309 [INFO][3408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:54:41.325007 containerd[1506]: 2025-02-13 19:54:41.309 [INFO][3408] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d661528d-dfa9-4522-9103-489eff2cd846", ResourceVersion:"1819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 54, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"5e:0f:76:f4:b3:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:41.325007 containerd[1506]: 2025-02-13 19:54:41.320 [INFO][3408] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:54:41.353459 containerd[1506]: time="2025-02-13T19:54:41.353077009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:41.353459 containerd[1506]: time="2025-02-13T19:54:41.353142490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:41.353459 containerd[1506]: time="2025-02-13T19:54:41.353158570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:41.353459 containerd[1506]: time="2025-02-13T19:54:41.353235931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:41.381188 systemd[1]: Started cri-containerd-5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6.scope - libcontainer container 5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6. Feb 13 19:54:41.419943 containerd[1506]: time="2025-02-13T19:54:41.419880333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d661528d-dfa9-4522-9103-489eff2cd846,Namespace:default,Attempt:0,} returns sandbox id \"5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6\"" Feb 13 19:54:41.422193 containerd[1506]: time="2025-02-13T19:54:41.422078752Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:54:41.948852 kubelet[2013]: E0213 19:54:41.948780 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:42.949248 kubelet[2013]: E0213 19:54:42.949196 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:43.136313 systemd-networkd[1380]: cali60e51b789ff: Gained IPv6LL Feb 13 19:54:43.949773 kubelet[2013]: E0213 19:54:43.949738 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:44.287681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028159338.mount: Deactivated successfully. 
Feb 13 19:54:44.950352 kubelet[2013]: E0213 19:54:44.950224 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:45.789697 containerd[1506]: time="2025-02-13T19:54:45.787915933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:45.789697 containerd[1506]: time="2025-02-13T19:54:45.789392545Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Feb 13 19:54:45.790883 containerd[1506]: time="2025-02-13T19:54:45.789915349Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:45.799741 containerd[1506]: time="2025-02-13T19:54:45.799644627Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.377522515s" Feb 13 19:54:45.799741 containerd[1506]: time="2025-02-13T19:54:45.799719267Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:54:45.801201 containerd[1506]: time="2025-02-13T19:54:45.801150239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:45.804693 containerd[1506]: time="2025-02-13T19:54:45.804573226Z" 
level=info msg="CreateContainer within sandbox \"5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:54:45.826985 containerd[1506]: time="2025-02-13T19:54:45.826888644Z" level=info msg="CreateContainer within sandbox \"5f5fc50eef9649c984322981367df905556eba79a3cde070722e2fea69f272c6\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9c600c0dca06aaa1ada70464dee1cb0af919f304ffbfb32a827b196c5ef0a2cb\"" Feb 13 19:54:45.827678 containerd[1506]: time="2025-02-13T19:54:45.827614290Z" level=info msg="StartContainer for \"9c600c0dca06aaa1ada70464dee1cb0af919f304ffbfb32a827b196c5ef0a2cb\"" Feb 13 19:54:45.862923 systemd[1]: Started cri-containerd-9c600c0dca06aaa1ada70464dee1cb0af919f304ffbfb32a827b196c5ef0a2cb.scope - libcontainer container 9c600c0dca06aaa1ada70464dee1cb0af919f304ffbfb32a827b196c5ef0a2cb. Feb 13 19:54:45.899116 containerd[1506]: time="2025-02-13T19:54:45.899073100Z" level=info msg="StartContainer for \"9c600c0dca06aaa1ada70464dee1cb0af919f304ffbfb32a827b196c5ef0a2cb\" returns successfully" Feb 13 19:54:45.950657 kubelet[2013]: E0213 19:54:45.950607 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:46.224815 kubelet[2013]: I0213 19:54:46.224637 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.844829103 podStartE2EDuration="6.224613316s" podCreationTimestamp="2025-02-13 19:54:40 +0000 UTC" firstStartedPulling="2025-02-13 19:54:41.421655788 +0000 UTC m=+33.881494253" lastFinishedPulling="2025-02-13 19:54:45.801440041 +0000 UTC m=+38.261278466" observedRunningTime="2025-02-13 19:54:46.223290025 +0000 UTC m=+38.683128490" watchObservedRunningTime="2025-02-13 19:54:46.224613316 +0000 UTC m=+38.684451741" Feb 13 19:54:46.817471 systemd[1]: 
run-containerd-runc-k8s.io-9c600c0dca06aaa1ada70464dee1cb0af919f304ffbfb32a827b196c5ef0a2cb-runc.S1SqRX.mount: Deactivated successfully. Feb 13 19:54:46.951965 kubelet[2013]: E0213 19:54:46.951872 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:47.924830 kubelet[2013]: E0213 19:54:47.924760 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:47.952701 kubelet[2013]: E0213 19:54:47.952591 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:48.953567 kubelet[2013]: E0213 19:54:48.953505 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:49.954303 kubelet[2013]: E0213 19:54:49.954213 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:50.955241 kubelet[2013]: E0213 19:54:50.955159 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:51.956237 kubelet[2013]: E0213 19:54:51.956170 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:52.956902 kubelet[2013]: E0213 19:54:52.956815 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:53.957432 kubelet[2013]: E0213 19:54:53.957354 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:54.958039 kubelet[2013]: E0213 19:54:54.957975 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:55.402632 kubelet[2013]: I0213 
19:54:55.402465 2013 topology_manager.go:215] "Topology Admit Handler" podUID="4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5" podNamespace="default" podName="test-pod-1" Feb 13 19:54:55.410635 systemd[1]: Created slice kubepods-besteffort-pod4e34ec8f_1f15_4e07_81b4_cbb1227bf6c5.slice - libcontainer container kubepods-besteffort-pod4e34ec8f_1f15_4e07_81b4_cbb1227bf6c5.slice. Feb 13 19:54:55.488274 kubelet[2013]: I0213 19:54:55.488203 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765tb\" (UniqueName: \"kubernetes.io/projected/4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5-kube-api-access-765tb\") pod \"test-pod-1\" (UID: \"4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5\") " pod="default/test-pod-1" Feb 13 19:54:55.488274 kubelet[2013]: I0213 19:54:55.488264 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f94b361-c032-49eb-9075-6f7ef0b0f908\" (UniqueName: \"kubernetes.io/nfs/4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5-pvc-4f94b361-c032-49eb-9075-6f7ef0b0f908\") pod \"test-pod-1\" (UID: \"4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5\") " pod="default/test-pod-1" Feb 13 19:54:55.617839 kernel: FS-Cache: Loaded Feb 13 19:54:55.642848 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:54:55.643019 kernel: RPC: Registered udp transport module. Feb 13 19:54:55.643074 kernel: RPC: Registered tcp transport module. Feb 13 19:54:55.643108 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:54:55.643140 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 19:54:55.827733 kernel: NFS: Registering the id_resolver key type Feb 13 19:54:55.828729 kernel: Key type id_resolver registered Feb 13 19:54:55.828874 kernel: Key type id_legacy registered Feb 13 19:54:55.851589 nfsidmap[3601]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:54:55.856526 nfsidmap[3602]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:54:55.959048 kubelet[2013]: E0213 19:54:55.958935 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:56.014641 containerd[1506]: time="2025-02-13T19:54:56.014262406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5,Namespace:default,Attempt:0,}" Feb 13 19:54:56.184095 systemd-networkd[1380]: cali5ec59c6bf6e: Link UP Feb 13 19:54:56.185655 systemd-networkd[1380]: cali5ec59c6bf6e: Gained carrier Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.075 [INFO][3604] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default 4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5 1883 0 2025-02-13 19:54:42 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.075 [INFO][3604] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.113 [INFO][3615] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" HandleID="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Workload="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.132 [INFO][3615] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" HandleID="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003849d0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-02-13 19:54:56.11360862 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.132 [INFO][3615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.133 [INFO][3615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.133 [INFO][3615] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.136 [INFO][3615] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.142 [INFO][3615] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.149 [INFO][3615] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.152 [INFO][3615] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.156 [INFO][3615] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.156 [INFO][3615] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.159 [INFO][3615] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2 Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.167 [INFO][3615] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.177 [INFO][3615] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 
handle="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.177 [INFO][3615] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" host="10.0.0.4" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.177 [INFO][3615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.177 [INFO][3615] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" HandleID="k8s-pod-network.9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Workload="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:54:56.195989 containerd[1506]: 2025-02-13 19:54:56.180 [INFO][3604] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5", ResourceVersion:"1883", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:56.197771 containerd[1506]: 2025-02-13 19:54:56.180 [INFO][3604] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:54:56.197771 containerd[1506]: 2025-02-13 19:54:56.180 [INFO][3604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:54:56.197771 containerd[1506]: 2025-02-13 19:54:56.185 [INFO][3604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:54:56.197771 containerd[1506]: 2025-02-13 19:54:56.185 [INFO][3604] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5", ResourceVersion:"1883", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 
54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"22:0c:77:6e:b5:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:54:56.197771 containerd[1506]: 2025-02-13 19:54:56.194 [INFO][3604] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Feb 13 19:54:56.222466 containerd[1506]: time="2025-02-13T19:54:56.221579775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:54:56.222466 containerd[1506]: time="2025-02-13T19:54:56.221730456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:54:56.222466 containerd[1506]: time="2025-02-13T19:54:56.221758936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:56.222466 containerd[1506]: time="2025-02-13T19:54:56.221915297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:54:56.245027 systemd[1]: Started cri-containerd-9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2.scope - libcontainer container 9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2. Feb 13 19:54:56.282995 containerd[1506]: time="2025-02-13T19:54:56.282934084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4e34ec8f-1f15-4e07-81b4-cbb1227bf6c5,Namespace:default,Attempt:0,} returns sandbox id \"9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2\"" Feb 13 19:54:56.286465 containerd[1506]: time="2025-02-13T19:54:56.286042026Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:54:56.692080 containerd[1506]: time="2025-02-13T19:54:56.692014824Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:54:56.696459 containerd[1506]: time="2025-02-13T19:54:56.696331614Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:54:56.700699 containerd[1506]: time="2025-02-13T19:54:56.700558324Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 414.476498ms" Feb 13 19:54:56.700699 containerd[1506]: time="2025-02-13T19:54:56.700611204Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:54:56.704128 containerd[1506]: time="2025-02-13T19:54:56.703836707Z" level=info msg="CreateContainer within sandbox \"9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2\" for container 
&ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:54:56.725360 containerd[1506]: time="2025-02-13T19:54:56.725295777Z" level=info msg="CreateContainer within sandbox \"9cfa815fb9d7b00fd23026ce574d0c4572faeb444f85e361b417196412e34fc2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d261ef0dc0b040381075fc32efd248e80729cb0d4aec28bfa364360755bbf035\"" Feb 13 19:54:56.726944 containerd[1506]: time="2025-02-13T19:54:56.726246343Z" level=info msg="StartContainer for \"d261ef0dc0b040381075fc32efd248e80729cb0d4aec28bfa364360755bbf035\"" Feb 13 19:54:56.759988 systemd[1]: Started cri-containerd-d261ef0dc0b040381075fc32efd248e80729cb0d4aec28bfa364360755bbf035.scope - libcontainer container d261ef0dc0b040381075fc32efd248e80729cb0d4aec28bfa364360755bbf035. Feb 13 19:54:56.786443 containerd[1506]: time="2025-02-13T19:54:56.786401444Z" level=info msg="StartContainer for \"d261ef0dc0b040381075fc32efd248e80729cb0d4aec28bfa364360755bbf035\" returns successfully" Feb 13 19:54:56.960287 kubelet[2013]: E0213 19:54:56.959582 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:57.792217 systemd-networkd[1380]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:54:57.960474 kubelet[2013]: E0213 19:54:57.960388 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:58.961267 kubelet[2013]: E0213 19:54:58.961184 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:54:59.962419 kubelet[2013]: E0213 19:54:59.962345 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:00.963606 kubelet[2013]: E0213 19:55:00.963454 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:01.963804 
kubelet[2013]: E0213 19:55:01.963708 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:02.964515 kubelet[2013]: E0213 19:55:02.964438 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:03.965457 kubelet[2013]: E0213 19:55:03.965381 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:04.966459 kubelet[2013]: E0213 19:55:04.966380 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:05.967371 kubelet[2013]: E0213 19:55:05.967282 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:06.968385 kubelet[2013]: E0213 19:55:06.968305 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:07.925579 kubelet[2013]: E0213 19:55:07.925509 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:55:07.947728 containerd[1506]: time="2025-02-13T19:55:07.947614774Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:55:07.948168 containerd[1506]: time="2025-02-13T19:55:07.947812456Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully" Feb 13 19:55:07.948168 containerd[1506]: time="2025-02-13T19:55:07.947827656Z" level=info msg="StopPodSandbox for \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully" Feb 13 19:55:07.948797 containerd[1506]: time="2025-02-13T19:55:07.948393779Z" level=info msg="RemovePodSandbox for 
\"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:55:07.948797 containerd[1506]: time="2025-02-13T19:55:07.948431539Z" level=info msg="Forcibly stopping sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\"" Feb 13 19:55:07.948797 containerd[1506]: time="2025-02-13T19:55:07.948507100Z" level=info msg="TearDown network for sandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" successfully" Feb 13 19:55:07.952436 containerd[1506]: time="2025-02-13T19:55:07.952289004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:55:07.952436 containerd[1506]: time="2025-02-13T19:55:07.952394004Z" level=info msg="RemovePodSandbox \"b9f6a25940ad171baa81d0d3b081bbbfb47e2920bf7783b46c45f9cb76a81bb2\" returns successfully" Feb 13 19:55:07.953063 containerd[1506]: time="2025-02-13T19:55:07.953010688Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" Feb 13 19:55:07.953116 containerd[1506]: time="2025-02-13T19:55:07.953107169Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully" Feb 13 19:55:07.953157 containerd[1506]: time="2025-02-13T19:55:07.953117209Z" level=info msg="StopPodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully" Feb 13 19:55:07.953405 containerd[1506]: time="2025-02-13T19:55:07.953379811Z" level=info msg="RemovePodSandbox for \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" Feb 13 19:55:07.953405 containerd[1506]: time="2025-02-13T19:55:07.953406171Z" level=info msg="Forcibly stopping sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\"" Feb 13 
19:55:07.953545 containerd[1506]: time="2025-02-13T19:55:07.953465811Z" level=info msg="TearDown network for sandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" successfully" Feb 13 19:55:07.956466 containerd[1506]: time="2025-02-13T19:55:07.956420830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:55:07.956585 containerd[1506]: time="2025-02-13T19:55:07.956493790Z" level=info msg="RemovePodSandbox \"70f33d95e8385592c4cb01e2961563720215b53116097c25c3f362cfdfb9b579\" returns successfully" Feb 13 19:55:07.957235 containerd[1506]: time="2025-02-13T19:55:07.957052154Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\"" Feb 13 19:55:07.957235 containerd[1506]: time="2025-02-13T19:55:07.957157914Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully" Feb 13 19:55:07.957235 containerd[1506]: time="2025-02-13T19:55:07.957170514Z" level=info msg="StopPodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully" Feb 13 19:55:07.957659 containerd[1506]: time="2025-02-13T19:55:07.957629757Z" level=info msg="RemovePodSandbox for \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\"" Feb 13 19:55:07.957743 containerd[1506]: time="2025-02-13T19:55:07.957686838Z" level=info msg="Forcibly stopping sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\"" Feb 13 19:55:07.957801 containerd[1506]: time="2025-02-13T19:55:07.957780318Z" level=info msg="TearDown network for sandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" successfully" Feb 13 19:55:07.960649 containerd[1506]: time="2025-02-13T19:55:07.960616696Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:55:07.960853 containerd[1506]: time="2025-02-13T19:55:07.960680536Z" level=info msg="RemovePodSandbox \"3758dd69b79e0729b219958f3e787c229562d886221a297803d170d75be25aeb\" returns successfully" Feb 13 19:55:07.961207 containerd[1506]: time="2025-02-13T19:55:07.961026899Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\"" Feb 13 19:55:07.961207 containerd[1506]: time="2025-02-13T19:55:07.961114579Z" level=info msg="TearDown network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" successfully" Feb 13 19:55:07.961207 containerd[1506]: time="2025-02-13T19:55:07.961125219Z" level=info msg="StopPodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" returns successfully" Feb 13 19:55:07.961761 containerd[1506]: time="2025-02-13T19:55:07.961587582Z" level=info msg="RemovePodSandbox for \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\"" Feb 13 19:55:07.961761 containerd[1506]: time="2025-02-13T19:55:07.961620542Z" level=info msg="Forcibly stopping sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\"" Feb 13 19:55:07.961761 containerd[1506]: time="2025-02-13T19:55:07.961703063Z" level=info msg="TearDown network for sandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" successfully" Feb 13 19:55:07.964723 containerd[1506]: time="2025-02-13T19:55:07.964596121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:55:07.964723 containerd[1506]: time="2025-02-13T19:55:07.964691162Z" level=info msg="RemovePodSandbox \"006d97ccb680c5b5f28ed5732aca675edef57b7fdff2b03099e4276555614dd5\" returns successfully" Feb 13 19:55:07.965361 containerd[1506]: time="2025-02-13T19:55:07.965133244Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\"" Feb 13 19:55:07.965361 containerd[1506]: time="2025-02-13T19:55:07.965218645Z" level=info msg="TearDown network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" successfully" Feb 13 19:55:07.965361 containerd[1506]: time="2025-02-13T19:55:07.965228085Z" level=info msg="StopPodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" returns successfully" Feb 13 19:55:07.965845 containerd[1506]: time="2025-02-13T19:55:07.965819409Z" level=info msg="RemovePodSandbox for \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\"" Feb 13 19:55:07.965945 containerd[1506]: time="2025-02-13T19:55:07.965878769Z" level=info msg="Forcibly stopping sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\"" Feb 13 19:55:07.965981 containerd[1506]: time="2025-02-13T19:55:07.965963490Z" level=info msg="TearDown network for sandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" successfully" Feb 13 19:55:07.968533 containerd[1506]: time="2025-02-13T19:55:07.968471506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:55:07.970150 containerd[1506]: time="2025-02-13T19:55:07.968542466Z" level=info msg="RemovePodSandbox \"278704e124a83a31552fa6c77d1194967c729f42febbf2920152e9bc7aff7fdb\" returns successfully"
Feb 13 19:55:07.970262 kubelet[2013]: E0213 19:55:07.968808 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:07.970975 containerd[1506]: time="2025-02-13T19:55:07.970913721Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\""
Feb 13 19:55:07.971234 containerd[1506]: time="2025-02-13T19:55:07.971132842Z" level=info msg="TearDown network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" successfully"
Feb 13 19:55:07.971234 containerd[1506]: time="2025-02-13T19:55:07.971158762Z" level=info msg="StopPodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" returns successfully"
Feb 13 19:55:07.971729 containerd[1506]: time="2025-02-13T19:55:07.971585325Z" level=info msg="RemovePodSandbox for \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\""
Feb 13 19:55:07.971729 containerd[1506]: time="2025-02-13T19:55:07.971618925Z" level=info msg="Forcibly stopping sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\""
Feb 13 19:55:07.971933 containerd[1506]: time="2025-02-13T19:55:07.971845087Z" level=info msg="TearDown network for sandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" successfully"
Feb 13 19:55:07.975346 containerd[1506]: time="2025-02-13T19:55:07.975188068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:55:07.975346 containerd[1506]: time="2025-02-13T19:55:07.975255828Z" level=info msg="RemovePodSandbox \"d6fb0b957ceaed1b32d43d67f7db66894749321b9b2dffa8935577002180a22b\" returns successfully"
Feb 13 19:55:07.975902 containerd[1506]: time="2025-02-13T19:55:07.975746191Z" level=info msg="StopPodSandbox for \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\""
Feb 13 19:55:07.975902 containerd[1506]: time="2025-02-13T19:55:07.975837312Z" level=info msg="TearDown network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" successfully"
Feb 13 19:55:07.975902 containerd[1506]: time="2025-02-13T19:55:07.975846872Z" level=info msg="StopPodSandbox for \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" returns successfully"
Feb 13 19:55:07.976698 containerd[1506]: time="2025-02-13T19:55:07.976266915Z" level=info msg="RemovePodSandbox for \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\""
Feb 13 19:55:07.976698 containerd[1506]: time="2025-02-13T19:55:07.976296715Z" level=info msg="Forcibly stopping sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\""
Feb 13 19:55:07.976698 containerd[1506]: time="2025-02-13T19:55:07.976356315Z" level=info msg="TearDown network for sandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" successfully"
Feb 13 19:55:07.979191 containerd[1506]: time="2025-02-13T19:55:07.979157413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:55:07.979392 containerd[1506]: time="2025-02-13T19:55:07.979317654Z" level=info msg="RemovePodSandbox \"b969261e1f359e00ecec357a7e76431cd1eefb9ace5dc92e2587ef0874d421a2\" returns successfully"
Feb 13 19:55:07.979940 containerd[1506]: time="2025-02-13T19:55:07.979795697Z" level=info msg="StopPodSandbox for \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\""
Feb 13 19:55:07.979940 containerd[1506]: time="2025-02-13T19:55:07.979878417Z" level=info msg="TearDown network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\" successfully"
Feb 13 19:55:07.979940 containerd[1506]: time="2025-02-13T19:55:07.979888497Z" level=info msg="StopPodSandbox for \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\" returns successfully"
Feb 13 19:55:07.980534 containerd[1506]: time="2025-02-13T19:55:07.980312180Z" level=info msg="RemovePodSandbox for \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\""
Feb 13 19:55:07.980534 containerd[1506]: time="2025-02-13T19:55:07.980335380Z" level=info msg="Forcibly stopping sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\""
Feb 13 19:55:07.980534 containerd[1506]: time="2025-02-13T19:55:07.980403181Z" level=info msg="TearDown network for sandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\" successfully"
Feb 13 19:55:07.983046 containerd[1506]: time="2025-02-13T19:55:07.982913796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:55:07.983046 containerd[1506]: time="2025-02-13T19:55:07.982960797Z" level=info msg="RemovePodSandbox \"d437be26b94398998f38769076a9c7d9edea4c40c5576d6398812b52864f4bad\" returns successfully"
Feb 13 19:55:07.983491 containerd[1506]: time="2025-02-13T19:55:07.983454320Z" level=info msg="StopPodSandbox for \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\""
Feb 13 19:55:07.983607 containerd[1506]: time="2025-02-13T19:55:07.983585881Z" level=info msg="TearDown network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" successfully"
Feb 13 19:55:07.983643 containerd[1506]: time="2025-02-13T19:55:07.983609921Z" level=info msg="StopPodSandbox for \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" returns successfully"
Feb 13 19:55:07.984343 containerd[1506]: time="2025-02-13T19:55:07.984154164Z" level=info msg="RemovePodSandbox for \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\""
Feb 13 19:55:07.984343 containerd[1506]: time="2025-02-13T19:55:07.984178124Z" level=info msg="Forcibly stopping sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\""
Feb 13 19:55:07.984343 containerd[1506]: time="2025-02-13T19:55:07.984236685Z" level=info msg="TearDown network for sandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" successfully"
Feb 13 19:55:07.987035 containerd[1506]: time="2025-02-13T19:55:07.986971262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:55:07.987419 containerd[1506]: time="2025-02-13T19:55:07.987046822Z" level=info msg="RemovePodSandbox \"aaf7cb5e02bdeefd21efda786c62233b03b5a2a7696da9443fb21d9af9c4a57d\" returns successfully"
Feb 13 19:55:07.988209 containerd[1506]: time="2025-02-13T19:55:07.987822347Z" level=info msg="StopPodSandbox for \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\""
Feb 13 19:55:07.988209 containerd[1506]: time="2025-02-13T19:55:07.987932508Z" level=info msg="TearDown network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\" successfully"
Feb 13 19:55:07.988209 containerd[1506]: time="2025-02-13T19:55:07.987946508Z" level=info msg="StopPodSandbox for \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\" returns successfully"
Feb 13 19:55:07.990042 containerd[1506]: time="2025-02-13T19:55:07.988719393Z" level=info msg="RemovePodSandbox for \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\""
Feb 13 19:55:07.990042 containerd[1506]: time="2025-02-13T19:55:07.988754713Z" level=info msg="Forcibly stopping sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\""
Feb 13 19:55:07.990042 containerd[1506]: time="2025-02-13T19:55:07.988839954Z" level=info msg="TearDown network for sandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\" successfully"
Feb 13 19:55:07.991680 containerd[1506]: time="2025-02-13T19:55:07.991630211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:55:07.991809 containerd[1506]: time="2025-02-13T19:55:07.991789452Z" level=info msg="RemovePodSandbox \"ba8e6e6c12f5e47e13aa7c3c589f067d7b0c549f006896852e2e1d12d2cc9de6\" returns successfully"
Feb 13 19:55:08.969092 kubelet[2013]: E0213 19:55:08.969012 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:09.969984 kubelet[2013]: E0213 19:55:09.969900 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:10.970863 kubelet[2013]: E0213 19:55:10.970797 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:11.971781 kubelet[2013]: E0213 19:55:11.971717 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:12.972794 kubelet[2013]: E0213 19:55:12.972735 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:13.973817 kubelet[2013]: E0213 19:55:13.973726 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:14.974589 kubelet[2013]: E0213 19:55:14.974506 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:15.975090 kubelet[2013]: E0213 19:55:15.975036 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:16.976089 kubelet[2013]: E0213 19:55:16.976019 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:17.976709 kubelet[2013]: E0213 19:55:17.976601 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:18.892847 kubelet[2013]: E0213 19:55:18.892053 2013 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:55:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:55:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:55:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T19:55:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.1\\\"],\\\"sizeBytes\\\":137671624},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.1\\\"],\\\"sizeBytes\\\":91072777},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":69692964},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\\\",\\\"registry.k8s.io/kube-proxy:v1.30.10\\\"],\\\"sizeBytes\\\":25662389},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\\\"],\\\"sizeBytes\\\":11252974},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.1\\\"],\\\"sizeBytes\\\":8834384},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\\\"],\\\"sizeBytes\\\":6487425},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"10.0.0.4\": Patch \"https://159.69.125.32:6443/api/v1/nodes/10.0.0.4/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:55:18.965511 kubelet[2013]: E0213 19:55:18.965404 2013 controller.go:195] "Failed to update lease" err="Put \"https://159.69.125.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:55:18.977809 kubelet[2013]: E0213 19:55:18.977732 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:19.977986 kubelet[2013]: E0213 19:55:19.977909 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:20.978418 kubelet[2013]: E0213 19:55:20.978329 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:21.979556 kubelet[2013]: E0213 19:55:21.979469 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:22.980465 kubelet[2013]: E0213 19:55:22.980398 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:23.981010 kubelet[2013]: E0213 19:55:23.980926 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:24.981795 kubelet[2013]: E0213 19:55:24.981740 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:25.982532 kubelet[2013]: E0213 19:55:25.982439 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:26.982707 kubelet[2013]: E0213 19:55:26.982605 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:27.924973 kubelet[2013]: E0213 19:55:27.924897 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:27.983504 kubelet[2013]: E0213 19:55:27.983431 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:28.893365 kubelet[2013]: E0213 19:55:28.893284 2013 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": Get \"https://159.69.125.32:6443/api/v1/nodes/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:55:28.966749 kubelet[2013]: E0213 19:55:28.966494 2013 controller.go:195] "Failed to update lease" err="Put \"https://159.69.125.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:55:28.984713 kubelet[2013]: E0213 19:55:28.984610 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:29.985747 kubelet[2013]: E0213 19:55:29.985649 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:55:30.986757 kubelet[2013]: E0213 19:55:30.986605 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"