Apr 30 00:45:21.925398 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:45:21.925434 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:45:21.925449 kernel: KASLR enabled
Apr 30 00:45:21.925459 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 30 00:45:21.925467 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 30 00:45:21.925475 kernel: random: crng init done
Apr 30 00:45:21.925485 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:45:21.925495 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 30 00:45:21.925504 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:45:21.925514 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925523 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925532 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925540 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925547 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925556 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925566 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925574 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925582 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:21.925590 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 00:45:21.925598 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 30 00:45:21.925605 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:45:21.925613 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:45:21.925621 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 30 00:45:21.925628 kernel: Zone ranges:
Apr 30 00:45:21.925636 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:45:21.925645 kernel: DMA32 empty
Apr 30 00:45:21.925653 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 30 00:45:21.925661 kernel: Movable zone start for each node
Apr 30 00:45:21.925668 kernel: Early memory node ranges
Apr 30 00:45:21.925676 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 30 00:45:21.925684 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 30 00:45:21.925692 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 30 00:45:21.925700 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 30 00:45:21.925707 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 30 00:45:21.925715 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 30 00:45:21.925723 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 30 00:45:21.925730 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:45:21.925740 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 30 00:45:21.925748 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:45:21.925756 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:45:21.925767 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:45:21.925775 kernel: psci: Trusted OS migration not required
Apr 30 00:45:21.925783 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:45:21.925793 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:45:21.925801 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:45:21.925810 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:45:21.925818 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:45:21.925826 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:45:21.925834 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:45:21.925843 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:45:21.925851 kernel: CPU features: detected: Spectre-v4
Apr 30 00:45:21.925859 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:45:21.925867 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:45:21.925877 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:45:21.925885 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:45:21.925910 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:45:21.925921 kernel: alternatives: applying boot alternatives
Apr 30 00:45:21.925931 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:45:21.925939 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:45:21.925948 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:45:21.925956 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:45:21.925964 kernel: Fallback order for Node 0: 0
Apr 30 00:45:21.925973 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 30 00:45:21.925981 kernel: Policy zone: Normal
Apr 30 00:45:21.925991 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:45:21.925999 kernel: software IO TLB: area num 2.
Apr 30 00:45:21.926007 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 30 00:45:21.926016 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
Apr 30 00:45:21.926024 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:45:21.926044 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:45:21.926055 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:45:21.926064 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:45:21.926072 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:45:21.926080 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:45:21.926088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:45:21.926099 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:45:21.926108 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:45:21.926116 kernel: GICv3: 256 SPIs implemented
Apr 30 00:45:21.926124 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:45:21.926132 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:45:21.926140 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:45:21.926148 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:45:21.926156 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:45:21.926165 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:45:21.926173 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:45:21.926181 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 30 00:45:21.926190 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 30 00:45:21.926200 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:45:21.926208 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:45:21.926216 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:45:21.926225 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:45:21.926233 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:45:21.926241 kernel: Console: colour dummy device 80x25
Apr 30 00:45:21.926249 kernel: ACPI: Core revision 20230628
Apr 30 00:45:21.926258 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:45:21.926266 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:45:21.926274 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:45:21.926285 kernel: landlock: Up and running.
Apr 30 00:45:21.926293 kernel: SELinux: Initializing.
Apr 30 00:45:21.926301 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:45:21.926310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:45:21.926319 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:45:21.926327 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:45:21.926336 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:45:21.926344 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:45:21.926353 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:45:21.926363 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:45:21.926371 kernel: Remapping and enabling EFI services.
Apr 30 00:45:21.926380 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:45:21.926388 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:45:21.926396 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:45:21.926405 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 30 00:45:21.926414 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:45:21.926422 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:45:21.926430 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:45:21.926439 kernel: SMP: Total of 2 processors activated.
Apr 30 00:45:21.926449 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:45:21.926457 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:45:21.926472 kernel: CPU features: detected: Common not Private translations
Apr 30 00:45:21.926483 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:45:21.926492 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:45:21.926501 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:45:21.926510 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:45:21.926519 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:45:21.926528 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:45:21.926539 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:45:21.926548 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:45:21.926557 kernel: alternatives: applying system-wide alternatives
Apr 30 00:45:21.926566 kernel: devtmpfs: initialized
Apr 30 00:45:21.926576 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:45:21.926585 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:45:21.926593 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:45:21.926604 kernel: SMBIOS 3.0.0 present.
Apr 30 00:45:21.926613 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 30 00:45:21.926622 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:45:21.926631 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:45:21.926640 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:45:21.926649 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:45:21.926658 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:45:21.926667 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Apr 30 00:45:21.926676 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:45:21.926686 kernel: cpuidle: using governor menu
Apr 30 00:45:21.926695 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:45:21.926704 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:45:21.926713 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:45:21.926722 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:45:21.926731 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:45:21.926740 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:45:21.926749 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:45:21.926758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:45:21.926768 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:45:21.926777 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:45:21.926786 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:45:21.926795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:45:21.926807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:45:21.926818 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:45:21.926827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:45:21.926839 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:45:21.926848 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:45:21.926858 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:45:21.926867 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:45:21.926875 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:45:21.926884 kernel: ACPI: Interpreter enabled
Apr 30 00:45:21.926926 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:45:21.926939 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:45:21.926949 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:45:21.926957 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:45:21.926966 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:45:21.927204 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:45:21.927315 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:45:21.927396 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:45:21.927473 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:45:21.927579 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:45:21.927593 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:45:21.927602 kernel: PCI host bridge to bus 0000:00
Apr 30 00:45:21.927699 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:45:21.927773 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:45:21.927858 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:45:21.927958 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:45:21.928084 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:45:21.928174 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 30 00:45:21.928242 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 30 00:45:21.928316 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:45:21.928390 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.928457 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 30 00:45:21.928529 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.928595 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 30 00:45:21.928668 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.928736 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 30 00:45:21.928810 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.928876 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 30 00:45:21.931139 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.931243 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 30 00:45:21.931319 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.931396 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 30 00:45:21.931470 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.931537 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 30 00:45:21.931615 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.931681 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 30 00:45:21.931754 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:21.931823 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 30 00:45:21.931927 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 30 00:45:21.932004 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 30 00:45:21.932101 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:45:21.932172 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 30 00:45:21.932240 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:45:21.932307 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:45:21.932388 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 00:45:21.932466 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 30 00:45:21.932564 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 00:45:21.932636 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 30 00:45:21.932705 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 30 00:45:21.932781 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 00:45:21.932853 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 30 00:45:21.934209 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 00:45:21.934309 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 30 00:45:21.934377 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 30 00:45:21.934466 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 00:45:21.934536 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 30 00:45:21.934612 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:45:21.934692 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:45:21.934761 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 30 00:45:21.934828 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 30 00:45:21.934968 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:45:21.935071 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 30 00:45:21.935142 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:45:21.935213 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:45:21.935284 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 30 00:45:21.935350 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 30 00:45:21.935415 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 30 00:45:21.935485 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 30 00:45:21.935557 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:45:21.935623 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:45:21.935696 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 30 00:45:21.935762 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 30 00:45:21.935827 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 30 00:45:21.936953 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 30 00:45:21.937146 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:45:21.937222 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:45:21.937295 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 30 00:45:21.937387 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:45:21.937465 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:45:21.937557 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 00:45:21.937625 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:45:21.937690 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:45:21.937761 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 00:45:21.937826 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:45:21.937941 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:45:21.938028 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 00:45:21.938110 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:45:21.938174 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:45:21.938245 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 30 00:45:21.938309 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:45:21.938378 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 30 00:45:21.938443 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:45:21.938515 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 30 00:45:21.938580 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:45:21.938648 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 30 00:45:21.938713 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:45:21.938779 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 30 00:45:21.938844 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:45:21.942082 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 30 00:45:21.942213 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:45:21.942285 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 30 00:45:21.942352 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:45:21.942421 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 30 00:45:21.942487 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:45:21.942555 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 30 00:45:21.942619 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:45:21.942712 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 30 00:45:21.942782 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 30 00:45:21.942851 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 30 00:45:21.942947 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 00:45:21.943019 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 30 00:45:21.943121 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 00:45:21.945156 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 30 00:45:21.945257 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 00:45:21.945333 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 30 00:45:21.945404 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 30 00:45:21.945477 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 30 00:45:21.945546 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 30 00:45:21.945618 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 30 00:45:21.945701 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 30 00:45:21.945775 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 30 00:45:21.945845 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 30 00:45:21.945965 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 30 00:45:21.946054 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 30 00:45:21.946135 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 30 00:45:21.946205 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 30 00:45:21.946277 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 30 00:45:21.946359 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 30 00:45:21.946435 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:45:21.946512 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 30 00:45:21.946584 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 00:45:21.946669 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 30 00:45:21.946742 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 30 00:45:21.946809 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:45:21.946886 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 30 00:45:21.948821 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 00:45:21.948908 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 30 00:45:21.948982 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 30 00:45:21.949066 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:45:21.949150 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:45:21.949219 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 30 00:45:21.949295 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 00:45:21.949361 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 30 00:45:21.949425 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 30 00:45:21.949491 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:45:21.949566 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:45:21.949636 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 00:45:21.949717 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 30 00:45:21.949794 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 30 00:45:21.949862 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:45:21.949996 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 30 00:45:21.950089 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 30 00:45:21.950161 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 00:45:21.950232 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 30 00:45:21.950299 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 30 00:45:21.950368 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:45:21.950453 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 30 00:45:21.950532 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 30 00:45:21.950601 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 00:45:21.950684 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 30 00:45:21.950752 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 30 00:45:21.950817 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:45:21.950947 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 30 00:45:21.951028 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 30 00:45:21.951117 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 30 00:45:21.951192 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 00:45:21.951259 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 30 00:45:21.951324 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 30 00:45:21.951390 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:45:21.951460 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 00:45:21.951528 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 30 00:45:21.951594 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 30 00:45:21.951667 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:45:21.951747 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 00:45:21.951815 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 30 00:45:21.951890 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 30 00:45:21.952177 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:45:21.952249 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:45:21.952306 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:45:21.952373 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:45:21.952447 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 30 00:45:21.952534 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 30 00:45:21.952598 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:45:21.952666 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 30 00:45:21.952726 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 30 00:45:21.952785 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:45:21.952854 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 30 00:45:21.952943 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 30 00:45:21.953011 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:45:21.953146 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 30 00:45:21.953216 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 30 00:45:21.953276 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:45:21.953345 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 30 00:45:21.953406 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 30 00:45:21.953476 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:45:21.953550 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 30 00:45:21.953611 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 30 00:45:21.953675 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:45:21.953756 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 30 00:45:21.953821 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 30 00:45:21.953882 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:45:21.953997 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 30 00:45:21.954081 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 30 00:45:21.954152 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:45:21.954224 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 30 00:45:21.954290 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 30 00:45:21.954351 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:45:21.954361 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:45:21.954369 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:45:21.954377 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:45:21.954385 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:45:21.954393 kernel: iommu: Default domain type: Translated
Apr 30 00:45:21.954401 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:45:21.954412 kernel: efivars: Registered efivars operations
Apr 30 00:45:21.954420 kernel: vgaarb: loaded
Apr 30 00:45:21.954427 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:45:21.954435 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:45:21.954443 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:45:21.954451 kernel: pnp: PnP ACPI init
Apr 30 00:45:21.954526 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:45:21.954537 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:45:21.954547 kernel: NET: Registered PF_INET protocol family
Apr 30 00:45:21.954555 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:45:21.954563 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:45:21.954571 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:45:21.954580 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:45:21.954588 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:45:21.954596 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:45:21.954604 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:45:21.954612 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:45:21.954622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:45:21.954711 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Apr 30 00:45:21.954722 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:45:21.954730 kernel: kvm [1]: HYP mode not available
Apr 30 00:45:21.954738 kernel: Initialise system trusted keyrings
Apr 30 00:45:21.954746 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:45:21.954754 kernel: Key type asymmetric registered
Apr 30 00:45:21.954762 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:45:21.954770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:45:21.954781 kernel: io scheduler mq-deadline registered
Apr 30 00:45:21.954788 kernel: io scheduler kyber registered
Apr 30 00:45:21.954796 kernel: io scheduler bfq registered
Apr 30 00:45:21.954807 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:45:21.954887 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Apr 30 00:45:21.954972 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Apr 30 00:45:21.955053 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.955131 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Apr 30 00:45:21.955217 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Apr 30 00:45:21.955290 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.955364 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Apr 30 00:45:21.955432 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Apr 30 00:45:21.955499 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.955571 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Apr 30 00:45:21.955644 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Apr 30 00:45:21.955711 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.955787 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Apr 30 00:45:21.955860 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Apr 30 00:45:21.955965 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.956086 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Apr 30 00:45:21.956171 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Apr 30 00:45:21.956240 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.956313 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Apr 30 00:45:21.956384 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Apr 30 00:45:21.956450 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.956523 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Apr 30 00:45:21.956590 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Apr 30 00:45:21.956656 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.956667 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Apr 30 00:45:21.956735 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 30 00:45:21.956801 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 30 00:45:21.956867 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:21.956880 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:45:21.956921 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:45:21.956930 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:45:21.957028 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 30 00:45:21.957123 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 30 00:45:21.957136 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:45:21.957145 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 00:45:21.957215 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 30 00:45:21.957231 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 30 00:45:21.957239 kernel: thunder_xcv, ver 1.0
Apr 30 00:45:21.957247 kernel: thunder_bgx, ver 1.0
Apr 30 00:45:21.957254 kernel: nicpf, ver 1.0
Apr 30 00:45:21.957262 kernel: nicvf, ver 1.0
Apr 30 00:45:21.957341 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:45:21.957404 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:45:21 UTC (1745973921)
Apr 30 00:45:21.957414 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:45:21.957424 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 00:45:21.957432 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:45:21.957440 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:45:21.957448 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:45:21.957455 kernel: Segment Routing with IPv6
Apr 30 00:45:21.957463 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:45:21.957471 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:45:21.957485 kernel: Key type dns_resolver registered
Apr 30 00:45:21.957494 kernel: registered taskstats version 1
Apr 30 00:45:21.957504 kernel: Loading compiled-in X.509 certificates
Apr 30 00:45:21.957512 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:45:21.957520 kernel: Key type .fscrypt registered
Apr 30 00:45:21.957527 kernel: Key type fscrypt-provisioning registered
Apr 30 00:45:21.957535 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:45:21.957543 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:45:21.957550 kernel: ima: No architecture policies found
Apr 30 00:45:21.957558 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:45:21.957566 kernel: clk: Disabling unused clocks
Apr 30 00:45:21.957575 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:45:21.957583 kernel: Run /init as init process
Apr 30 00:45:21.957591 kernel: with arguments:
Apr 30 00:45:21.957599 kernel: /init
Apr 30 00:45:21.957606 kernel: with environment:
Apr 30 00:45:21.957614 kernel: HOME=/
Apr 30 00:45:21.957621 kernel: TERM=linux
Apr 30 00:45:21.957628 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:45:21.957638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:45:21.957656 systemd[1]: Detected virtualization kvm.
Apr 30 00:45:21.957665 systemd[1]: Detected architecture arm64.
Apr 30 00:45:21.957673 systemd[1]: Running in initrd.
Apr 30 00:45:21.957682 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:45:21.957690 systemd[1]: Hostname set to <localhost>.
Apr 30 00:45:21.957699 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:45:21.957709 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:45:21.957719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:45:21.957728 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:45:21.957740 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:45:21.957749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:45:21.957758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:45:21.957766 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:45:21.957776 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:45:21.957786 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:45:21.957795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:45:21.957804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:45:21.957812 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:45:21.957820 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:45:21.957828 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:45:21.957837 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:45:21.957845 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:45:21.957854 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:45:21.957863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:45:21.957871 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:45:21.957880 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:45:21.957888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:45:21.958003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:45:21.958013 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:45:21.958022 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:45:21.958040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:45:21.958055 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:45:21.958063 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:45:21.958071 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:45:21.958080 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:45:21.958088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:45:21.958096 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:45:21.958135 systemd-journald[235]: Collecting audit messages is disabled.
Apr 30 00:45:21.958158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:45:21.958167 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:45:21.958176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:45:21.958186 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:45:21.958194 kernel: Bridge firewalling registered
Apr 30 00:45:21.958202 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:45:21.958211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:45:21.958220 systemd-journald[235]: Journal started
Apr 30 00:45:21.958242 systemd-journald[235]: Runtime Journal (/run/log/journal/a1b4b846f2454b1a9885961b26f43866) is 8.0M, max 76.6M, 68.6M free.
Apr 30 00:45:21.916781 systemd-modules-load[236]: Inserted module 'overlay'
Apr 30 00:45:21.959666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:45:21.947330 systemd-modules-load[236]: Inserted module 'br_netfilter'
Apr 30 00:45:21.961511 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:45:21.961843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:45:21.973403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:45:21.975606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:45:21.981126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:45:21.983775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:45:21.994836 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:45:22.000277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:45:22.006207 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:45:22.012973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:45:22.021019 dracut-cmdline[270]: dracut-dracut-053
Apr 30 00:45:22.021794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:45:22.024925 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:45:22.060322 systemd-resolved[279]: Positive Trust Anchors:
Apr 30 00:45:22.060343 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:45:22.060381 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:45:22.068022 systemd-resolved[279]: Defaulting to hostname 'linux'.
Apr 30 00:45:22.069264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:45:22.070003 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:45:22.117046 kernel: SCSI subsystem initialized
Apr 30 00:45:22.121934 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:45:22.130954 kernel: iscsi: registered transport (tcp)
Apr 30 00:45:22.145543 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:45:22.145632 kernel: QLogic iSCSI HBA Driver
Apr 30 00:45:22.203684 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:45:22.211306 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:45:22.235048 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:45:22.235187 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:45:22.235212 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:45:22.286993 kernel: raid6: neonx8 gen() 15682 MB/s
Apr 30 00:45:22.303964 kernel: raid6: neonx4 gen() 15514 MB/s
Apr 30 00:45:22.320957 kernel: raid6: neonx2 gen() 13119 MB/s
Apr 30 00:45:22.337984 kernel: raid6: neonx1 gen() 10395 MB/s
Apr 30 00:45:22.354985 kernel: raid6: int64x8 gen() 6874 MB/s
Apr 30 00:45:22.372018 kernel: raid6: int64x4 gen() 7293 MB/s
Apr 30 00:45:22.388961 kernel: raid6: int64x2 gen() 6084 MB/s
Apr 30 00:45:22.405964 kernel: raid6: int64x1 gen() 5033 MB/s
Apr 30 00:45:22.406070 kernel: raid6: using algorithm neonx8 gen() 15682 MB/s
Apr 30 00:45:22.422960 kernel: raid6: .... xor() 11838 MB/s, rmw enabled
Apr 30 00:45:22.423044 kernel: raid6: using neon recovery algorithm
Apr 30 00:45:22.428166 kernel: xor: measuring software checksum speed
Apr 30 00:45:22.428235 kernel: 8regs : 19769 MB/sec
Apr 30 00:45:22.428256 kernel: 32regs : 19707 MB/sec
Apr 30 00:45:22.428289 kernel: arm64_neon : 26831 MB/sec
Apr 30 00:45:22.428933 kernel: xor: using function: arm64_neon (26831 MB/sec)
Apr 30 00:45:22.481965 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:45:22.497851 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:45:22.505169 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:45:22.521267 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Apr 30 00:45:22.525285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:45:22.538189 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:45:22.556941 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Apr 30 00:45:22.598464 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:45:22.606257 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:45:22.657998 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:45:22.668139 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:45:22.686952 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:45:22.690924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:45:22.693249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:45:22.695139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:45:22.705866 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:45:22.723383 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:45:22.757938 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:45:22.761958 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 00:45:22.762058 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 00:45:22.798252 kernel: ACPI: bus type USB registered
Apr 30 00:45:22.798315 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:45:22.798327 kernel: usbcore: registered new interface driver hub
Apr 30 00:45:22.804026 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 30 00:45:22.808942 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 30 00:45:22.809110 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:45:22.809123 kernel: usbcore: registered new device driver usb
Apr 30 00:45:22.809133 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 30 00:45:22.807777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:45:22.807903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:45:22.811001 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:45:22.812122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:45:22.812301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:45:22.812892 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:45:22.821229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:45:22.834136 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 30 00:45:22.844990 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 00:45:22.845531 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 30 00:45:22.845669 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 30 00:45:22.845754 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 00:45:22.845835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:45:22.845846 kernel: GPT:17805311 != 80003071
Apr 30 00:45:22.845856 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:45:22.845865 kernel: GPT:17805311 != 80003071
Apr 30 00:45:22.845874 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:45:22.845886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:45:22.845932 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 30 00:45:22.853342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:45:22.857377 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:45:22.865667 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 00:45:22.865799 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 00:45:22.865883 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:45:22.865989 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 00:45:22.866106 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 00:45:22.866189 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:45:22.866305 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 00:45:22.866385 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 00:45:22.866480 kernel: hub 2-0:1.0: USB hub found
Apr 30 00:45:22.866570 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 00:45:22.863323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:45:22.893553 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:45:22.905927 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (504)
Apr 30 00:45:22.910314 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 00:45:22.916938 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (507)
Apr 30 00:45:22.927813 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 00:45:22.934380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 00:45:22.941016 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 00:45:22.941704 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 00:45:22.954187 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:45:22.964323 disk-uuid[573]: Primary Header is updated.
Apr 30 00:45:22.964323 disk-uuid[573]: Secondary Entries is updated.
Apr 30 00:45:22.964323 disk-uuid[573]: Secondary Header is updated.
Apr 30 00:45:22.971081 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:45:23.104958 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 00:45:23.347056 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 30 00:45:23.482522 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 30 00:45:23.482583 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 00:45:23.484134 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 30 00:45:23.539294 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 30 00:45:23.539876 kernel: usbcore: registered new interface driver usbhid
Apr 30 00:45:23.539946 kernel: usbhid: USB HID core driver
Apr 30 00:45:23.990004 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:45:23.990526 disk-uuid[575]: The operation has completed successfully.
Apr 30 00:45:24.042784 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:45:24.042943 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:45:24.052131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:45:24.067538 sh[593]: Success
Apr 30 00:45:24.081133 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:45:24.135356 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:45:24.149747 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:45:24.151954 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:45:24.178370 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:45:24.178439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:45:24.178455 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:45:24.178994 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:45:24.179919 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:45:24.187983 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 00:45:24.190070 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:45:24.191407 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:45:24.198223 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:45:24.201683 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:45:24.215246 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:45:24.215306 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:45:24.215318 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:45:24.218951 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:45:24.219035 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:45:24.230924 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:45:24.230860 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:45:24.238527 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:45:24.246165 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:45:24.329859 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:45:24.339888 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:45:24.343116 ignition[681]: Ignition 2.19.0
Apr 30 00:45:24.343123 ignition[681]: Stage: fetch-offline
Apr 30 00:45:24.345469 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:45:24.343162 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:45:24.343170 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:45:24.343362 ignition[681]: parsed url from cmdline: ""
Apr 30 00:45:24.343365 ignition[681]: no config URL provided
Apr 30 00:45:24.343369 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:45:24.343375 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:45:24.343380 ignition[681]: failed to fetch config: resource requires networking
Apr 30 00:45:24.343580 ignition[681]: Ignition finished successfully
Apr 30 00:45:24.372403 systemd-networkd[781]: lo: Link UP
Apr 30 00:45:24.372416 systemd-networkd[781]: lo: Gained carrier
Apr 30 00:45:24.374563 systemd-networkd[781]: Enumeration completed
Apr 30 00:45:24.375373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:45:24.376162 systemd[1]: Reached target network.target - Network.
Apr 30 00:45:24.377241 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:45:24.377244 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:45:24.378680 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:45:24.378683 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:45:24.380520 systemd-networkd[781]: eth0: Link UP
Apr 30 00:45:24.380524 systemd-networkd[781]: eth0: Gained carrier
Apr 30 00:45:24.380534 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:45:24.384227 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:45:24.384685 systemd-networkd[781]: eth1: Link UP
Apr 30 00:45:24.384691 systemd-networkd[781]: eth1: Gained carrier
Apr 30 00:45:24.384738 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:45:24.413104 ignition[785]: Ignition 2.19.0
Apr 30 00:45:24.413860 ignition[785]: Stage: fetch
Apr 30 00:45:24.414495 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:45:24.414991 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:45:24.414508 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:45:24.414621 ignition[785]: parsed url from cmdline: ""
Apr 30 00:45:24.414628 ignition[785]: no config URL provided
Apr 30 00:45:24.414633 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:45:24.414641 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:45:24.414662 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 00:45:24.415386 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 00:45:24.458043 systemd-networkd[781]: eth0: DHCPv4 address 91.99.89.231/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 00:45:24.616127 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 00:45:24.623116 ignition[785]: GET result: OK
Apr 30 00:45:24.623262 ignition[785]: parsing config with SHA512: 95b721ae71bee3cd285ebc77f80fb40ce6daec082126162d81e6be218b9d57a7eff66568b9b5f4c2c878c8a9c62800a69592e868efb5094be6f5ba6156daaeb6
Apr 30 00:45:24.628692 unknown[785]: fetched base config from "system"
Apr 30 00:45:24.628704 unknown[785]: fetched base config from "system"
Apr 30 00:45:24.629239 ignition[785]: fetch: fetch complete
Apr 30 00:45:24.628711 unknown[785]: fetched user config from "hetzner"
Apr 30 00:45:24.629246 ignition[785]: fetch: fetch passed
Apr 30 00:45:24.629313 ignition[785]: Ignition finished successfully
Apr 30 00:45:24.633378 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:45:24.641169 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:45:24.657299 ignition[792]: Ignition 2.19.0
Apr 30 00:45:24.657313 ignition[792]: Stage: kargs
Apr 30 00:45:24.657545 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:45:24.657556 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:45:24.660559 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:45:24.658637 ignition[792]: kargs: kargs passed
Apr 30 00:45:24.658701 ignition[792]: Ignition finished successfully
Apr 30 00:45:24.666166 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:45:24.680949 ignition[799]: Ignition 2.19.0
Apr 30 00:45:24.680960 ignition[799]: Stage: disks
Apr 30 00:45:24.681214 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:45:24.681228 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:45:24.682223 ignition[799]: disks: disks passed
Apr 30 00:45:24.682293 ignition[799]: Ignition finished successfully
Apr 30 00:45:24.684343 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:45:24.685762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:45:24.687066 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:45:24.688111 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:45:24.689147 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:45:24.690249 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:45:24.698226 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:45:24.718639 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 00:45:24.723013 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:45:24.730155 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:45:24.782999 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:45:24.784685 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:45:24.786748 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:45:24.794114 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:45:24.797571 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:45:24.800103 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:45:24.808244 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815)
Apr 30 00:45:24.810058 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:45:24.810112 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:45:24.810124 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:45:24.810146 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:45:24.810184 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:45:24.814767 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:45:24.824637 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:45:24.824696 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:45:24.826250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:45:24.831197 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:45:24.871685 coreos-metadata[817]: Apr 30 00:45:24.871 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 00:45:24.873847 coreos-metadata[817]: Apr 30 00:45:24.873 INFO Fetch successful
Apr 30 00:45:24.876680 coreos-metadata[817]: Apr 30 00:45:24.874 INFO wrote hostname ci-4081-3-3-c-89ff891e34 to /sysroot/etc/hostname
Apr 30 00:45:24.877679 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:45:24.879791 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:45:24.884147 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:45:24.889508 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:45:24.894703 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:45:25.009728 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:45:25.015062 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:45:25.017092 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:45:25.030938 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:45:25.049965 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:45:25.057845 ignition[933]: INFO : Ignition 2.19.0
Apr 30 00:45:25.058832 ignition[933]: INFO : Stage: mount
Apr 30 00:45:25.059918 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:45:25.059918 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:45:25.061105 ignition[933]: INFO : mount: mount passed
Apr 30 00:45:25.062474 ignition[933]: INFO : Ignition finished successfully
Apr 30 00:45:25.063211 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:45:25.068053 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:45:25.179218 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:45:25.188259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:45:25.199959 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944)
Apr 30 00:45:25.202188 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:45:25.202267 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:45:25.202290 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:45:25.205973 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:45:25.206090 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:45:25.208406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:45:25.235676 ignition[961]: INFO : Ignition 2.19.0
Apr 30 00:45:25.238046 ignition[961]: INFO : Stage: files
Apr 30 00:45:25.238046 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:45:25.238046 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:45:25.240988 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:45:25.240988 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:45:25.240988 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:45:25.244491 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:45:25.245488 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:45:25.246613 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:45:25.245808 unknown[961]: wrote ssh authorized keys file for user: core
Apr 30 00:45:25.248381 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:45:25.248381 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 00:45:25.331665 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:45:25.620081 systemd-networkd[781]: eth1: Gained IPv6LL
Apr 30 00:45:25.642048 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:45:25.643257 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:45:25.651532 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:45:25.651532 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Apr 30 00:45:25.651532 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Apr 30 00:45:25.651532 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Apr 30 00:45:25.651532 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Apr 30 00:45:26.209092 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 00:45:26.388247 systemd-networkd[781]: eth0: Gained IPv6LL
Apr 30 00:45:26.437566 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Apr 30 00:45:26.437566 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 00:45:26.441118 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:45:26.441118 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:45:26.441118 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 00:45:26.441118 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:45:26.444743 ignition[961]: INFO : files: files passed
Apr 30 00:45:26.444743 ignition[961]: INFO : Ignition finished successfully
Apr 30 00:45:26.443610 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:45:26.452188 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:45:26.456733 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:45:26.461600 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:45:26.461892 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:45:26.488529 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:45:26.488529 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:45:26.492377 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:45:26.493510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:45:26.496292 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:45:26.503152 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:45:26.534132 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:45:26.534921 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:45:26.535854 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:45:26.536600 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:45:26.537982 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:45:26.540926 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:45:26.568313 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:45:26.582243 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:45:26.598628 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:45:26.599380 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:45:26.601445 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:45:26.603120 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:45:26.603255 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:45:26.604525 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:45:26.605202 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:45:26.607082 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:45:26.608270 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:45:26.609370 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:45:26.610532 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:45:26.611539 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:45:26.613880 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:45:26.614671 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:45:26.615913 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:45:26.616870 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:45:26.617084 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:45:26.618317 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:45:26.618992 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:45:26.620048 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:45:26.620487 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:45:26.621232 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:45:26.621356 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:45:26.622802 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:45:26.622942 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:45:26.624259 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:45:26.624369 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:45:26.625531 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:45:26.625641 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:45:26.636124 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:45:26.640963 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:45:26.645790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:45:26.645970 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:45:26.647094 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:45:26.647198 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:45:26.653706 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:45:26.653821 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:45:26.657977 ignition[1013]: INFO : Ignition 2.19.0
Apr 30 00:45:26.657977 ignition[1013]: INFO : Stage: umount
Apr 30 00:45:26.657977 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:45:26.657977 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:45:26.661711 ignition[1013]: INFO : umount: umount passed
Apr 30 00:45:26.661711 ignition[1013]: INFO : Ignition finished successfully
Apr 30 00:45:26.660092 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:45:26.662190 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:45:26.663796 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:45:26.665455 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:45:26.668430 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:45:26.668516 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:45:26.671475 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:45:26.671538 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:45:26.672252 systemd[1]: Stopped target network.target - Network.
Apr 30 00:45:26.672708 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:45:26.672773 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:45:26.674091 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:45:26.674571 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:45:26.677985 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:45:26.679431 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:45:26.679978 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:45:26.680862 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:45:26.680939 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:45:26.682022 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:45:26.682080 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:45:26.683110 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:45:26.683185 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:45:26.684248 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:45:26.684310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:45:26.685567 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:45:26.686552 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:45:26.688438 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:45:26.689049 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:45:26.689152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:45:26.690301 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:45:26.690395 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:45:26.691262 systemd-networkd[781]: eth1: DHCPv6 lease lost
Apr 30 00:45:26.693981 systemd-networkd[781]: eth0: DHCPv6 lease lost
Apr 30 00:45:26.695859 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:45:26.696010 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:45:26.699360 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:45:26.699487 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:45:26.702565 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:45:26.702616 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:45:26.707054 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:45:26.707674 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:45:26.707747 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:45:26.708763 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:45:26.708821 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:45:26.709925 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:45:26.709981 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:45:26.712053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:45:26.712101 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:45:26.713278 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:45:26.732628 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:45:26.732867 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:45:26.734836 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:45:26.734957 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:45:26.738334 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:45:26.738451 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:45:26.740503 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:45:26.740541 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:45:26.741462 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:45:26.741515 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:45:26.742870 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:45:26.742942 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:45:26.744376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:45:26.744432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:45:26.762382 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:45:26.763926 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:45:26.764069 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:45:26.765708 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 00:45:26.765795 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:45:26.767235 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:45:26.767324 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:45:26.770144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:45:26.770281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:45:26.775324 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:45:26.775467 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:45:26.777542 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:45:26.784141 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:45:26.794805 systemd[1]: Switching root.
Apr 30 00:45:26.831125 systemd-journald[235]: Journal stopped
Apr 30 00:45:27.733932 systemd-journald[235]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:45:27.734028 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:45:27.734042 kernel: SELinux: policy capability open_perms=1
Apr 30 00:45:27.734052 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:45:27.734061 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:45:27.734076 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:45:27.734087 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:45:27.734096 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:45:27.734105 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:45:27.734118 kernel: audit: type=1403 audit(1745973926.983:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:45:27.734129 systemd[1]: Successfully loaded SELinux policy in 37.799ms.
Apr 30 00:45:27.734152 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.497ms.
Apr 30 00:45:27.734163 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:45:27.734174 systemd[1]: Detected virtualization kvm.
Apr 30 00:45:27.734184 systemd[1]: Detected architecture arm64.
Apr 30 00:45:27.734194 systemd[1]: Detected first boot.
Apr 30 00:45:27.734205 systemd[1]: Hostname set to <ci-4081-3-3-c-89ff891e34>.
Apr 30 00:45:27.734217 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:45:27.734229 zram_generator::config[1056]: No configuration found.
Apr 30 00:45:27.734240 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:45:27.734250 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 00:45:27.734263 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 00:45:27.734273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:45:27.734285 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:45:27.734295 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:45:27.734307 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:45:27.734317 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:45:27.734328 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:45:27.734338 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:45:27.734348 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:45:27.734358 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:45:27.734369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:45:27.734379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:45:27.734389 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:45:27.734401 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:45:27.734412 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:45:27.734422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:45:27.734432 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 00:45:27.734442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:45:27.734452 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 00:45:27.734463 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 00:45:27.734475 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:45:27.734486 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:45:27.734497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:45:27.734513 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:45:27.734524 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:45:27.734534 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:45:27.734544 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:45:27.734555 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:45:27.734567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:45:27.734577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:45:27.734587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:45:27.734598 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:45:27.734608 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:45:27.734618 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:45:27.734629 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:45:27.734639 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:45:27.734649 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:45:27.734661 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:45:27.734672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:45:27.734682 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:45:27.734693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:45:27.734707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:45:27.734719 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:45:27.734731 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:45:27.734742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:45:27.734752 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:45:27.734763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:45:27.734774 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:45:27.734784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:45:27.734795 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:45:27.734806 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 00:45:27.734818 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 00:45:27.734828 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 00:45:27.734838 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 00:45:27.734849 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:45:27.734859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:45:27.734870 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:45:27.734880 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:45:27.734890 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:45:27.738848 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 00:45:27.738882 systemd[1]: Stopped verity-setup.service.
Apr 30 00:45:27.738917 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:45:27.738931 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:45:27.738942 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:45:27.738952 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:45:27.738965 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:45:27.738976 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:45:27.738987 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:45:27.739001 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:45:27.739051 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:45:27.739065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:45:27.739076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:45:27.739087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:45:27.739098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:45:27.739111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:45:27.739145 kernel: loop: module loaded
Apr 30 00:45:27.739159 kernel: fuse: init (API version 7.39)
Apr 30 00:45:27.739169 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:45:27.739180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:45:27.739227 systemd-journald[1116]: Collecting audit messages is disabled.
Apr 30 00:45:27.739255 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:45:27.739267 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:45:27.739277 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:45:27.739288 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:45:27.739298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:45:27.739308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:45:27.739320 systemd-journald[1116]: Journal started
Apr 30 00:45:27.739344 systemd-journald[1116]: Runtime Journal (/run/log/journal/a1b4b846f2454b1a9885961b26f43866) is 8.0M, max 76.6M, 68.6M free.
Apr 30 00:45:27.479685 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:45:27.500059 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 00:45:27.500581 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 00:45:27.740355 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:45:27.742426 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:45:27.744180 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:45:27.758102 kernel: ACPI: bus type drm_connector registered
Apr 30 00:45:27.764830 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:45:27.765072 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:45:27.766690 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:45:27.778052 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:45:27.780987 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:45:27.781052 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:45:27.782673 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:45:27.796233 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:45:27.801156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:45:27.802124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:45:27.804793 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:45:27.807990 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:45:27.809161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:45:27.813398 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:45:27.815078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:45:27.817936 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:45:27.822943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:45:27.823811 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:45:27.831472 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:45:27.849173 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
Apr 30 00:45:27.849185 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
Apr 30 00:45:27.855826 systemd-journald[1116]: Time spent on flushing to /var/log/journal/a1b4b846f2454b1a9885961b26f43866 is 106.696ms for 1129 entries.
Apr 30 00:45:27.855826 systemd-journald[1116]: System Journal (/var/log/journal/a1b4b846f2454b1a9885961b26f43866) is 8.0M, max 584.8M, 576.8M free.
Apr 30 00:45:27.998161 systemd-journald[1116]: Received client request to flush runtime journal.
Apr 30 00:45:27.998208 kernel: loop0: detected capacity change from 0 to 114328
Apr 30 00:45:27.998221 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:45:27.998234 kernel: loop1: detected capacity change from 0 to 114432
Apr 30 00:45:27.854711 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:45:27.864399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:45:27.875937 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:45:27.885468 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:45:27.888254 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:45:27.899196 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:45:27.951441 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:45:27.968114 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:45:27.973104 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:45:27.979949 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:45:27.998270 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:45:28.009256 kernel: loop2: detected capacity change from 0 to 8
Apr 30 00:45:28.009121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:45:28.011936 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:45:28.026191 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 00:45:28.033060 kernel: loop3: detected capacity change from 0 to 189592
Apr 30 00:45:28.043047 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 30 00:45:28.043065 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 30 00:45:28.052739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:45:28.086940 kernel: loop4: detected capacity change from 0 to 114328
Apr 30 00:45:28.104038 kernel: loop5: detected capacity change from 0 to 114432
Apr 30 00:45:28.125861 kernel: loop6: detected capacity change from 0 to 8
Apr 30 00:45:28.125989 kernel: loop7: detected capacity change from 0 to 189592
Apr 30 00:45:28.144889 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 00:45:28.145429 (sd-merge)[1200]: Merged extensions into '/usr'.
Apr 30 00:45:28.154673 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:45:28.154694 systemd[1]: Reloading...
Apr 30 00:45:28.260002 zram_generator::config[1229]: No configuration found.
Apr 30 00:45:28.406982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:45:28.415937 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:45:28.453849 systemd[1]: Reloading finished in 298 ms.
Apr 30 00:45:28.490793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:45:28.495593 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:45:28.509269 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:45:28.517447 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:45:28.532104 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:45:28.532122 systemd[1]: Reloading...
Apr 30 00:45:28.545179 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:45:28.545874 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:45:28.547683 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:45:28.553052 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 30 00:45:28.553117 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 30 00:45:28.557779 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:45:28.557795 systemd-tmpfiles[1264]: Skipping /boot
Apr 30 00:45:28.567946 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:45:28.567956 systemd-tmpfiles[1264]: Skipping /boot
Apr 30 00:45:28.612939 zram_generator::config[1291]: No configuration found.
Apr 30 00:45:28.716441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:45:28.763087 systemd[1]: Reloading finished in 230 ms.
Apr 30 00:45:28.783687 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:45:28.790660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:45:28.805209 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:45:28.809276 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:45:28.818975 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:45:28.823577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:45:28.829247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:45:28.835207 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:45:28.839835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:45:28.845396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:45:28.851186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:45:28.857173 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:45:28.858524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:45:28.860646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:45:28.860804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:45:28.864720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:45:28.869189 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:45:28.870055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:45:28.871780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:45:28.873482 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:45:28.877479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:45:28.881554 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:45:28.890226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:45:28.890605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:45:28.892313 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Apr 30 00:45:28.894982 systemd[1]: Finished ensure-sysext.service. Apr 30 00:45:28.904188 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:45:28.905221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:45:28.905773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:45:28.914872 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:45:28.920249 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:45:28.926165 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:45:28.927973 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:45:28.934409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:45:28.944775 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:45:28.947366 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:45:28.947546 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:45:28.953588 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:45:28.958512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:45:28.986258 augenrules[1384]: No rules Apr 30 00:45:28.988970 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:45:28.998341 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:45:29.020564 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:45:29.025399 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 00:45:29.135888 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:45:29.138936 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:45:29.148042 systemd-networkd[1367]: lo: Link UP Apr 30 00:45:29.148053 systemd-networkd[1367]: lo: Gained carrier Apr 30 00:45:29.149657 systemd-networkd[1367]: Enumeration completed Apr 30 00:45:29.149763 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:45:29.157275 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:29.157287 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:45:29.159768 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:29.159781 systemd-networkd[1367]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:45:29.160405 systemd-networkd[1367]: eth0: Link UP Apr 30 00:45:29.160410 systemd-networkd[1367]: eth0: Gained carrier Apr 30 00:45:29.160428 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:29.164126 systemd-resolved[1340]: Positive Trust Anchors: Apr 30 00:45:29.165962 systemd-resolved[1340]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:45:29.166042 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:45:29.171349 systemd-resolved[1340]: Using system hostname 'ci-4081-3-3-c-89ff891e34'. Apr 30 00:45:29.176557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:45:29.177415 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:45:29.178210 systemd[1]: Reached target network.target - Network. Apr 30 00:45:29.178695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:45:29.180869 systemd-networkd[1367]: eth1: Link UP Apr 30 00:45:29.180883 systemd-networkd[1367]: eth1: Gained carrier Apr 30 00:45:29.180931 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:29.186348 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:29.198424 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:29.208068 systemd-networkd[1367]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:45:29.209135 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. Apr 30 00:45:29.228951 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1374) Apr 30 00:45:29.235100 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 00:45:29.235265 systemd-networkd[1367]: eth0: DHCPv4 address 91.99.89.231/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 00:45:29.235642 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. Apr 30 00:45:29.236298 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. Apr 30 00:45:29.275804 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 00:45:29.278861 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 00:45:29.279030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:45:29.286152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:45:29.290338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:45:29.295168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:45:29.295793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
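The positive trust anchor logged above is the root-zone KSK DS record (key tag 20326), and the negative anchors exempt private and reverse-lookup zones from DNSSEC validation. systemd-resolved reads additional anchors from /etc/dnssec-trust-anchors.d/; a sketch of adding a site-local negative anchor (file name and domain are illustrative, and this only takes effect when DNSSEC= is enabled in resolved.conf):

    mkdir -p /etc/dnssec-trust-anchors.d
    echo 'example.internal' >/etc/dnssec-trust-anchors.d/local.negative
    systemctl restart systemd-resolved
    resolvectl statistics      # includes DNSSEC verdict counters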
Apr 30 00:45:29.306223 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:45:29.306936 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:45:29.307356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:45:29.307849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:45:29.311435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:45:29.311596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:45:29.315374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:45:29.319587 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:45:29.320030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:45:29.320866 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:45:29.327633 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:45:29.352918 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 30 00:45:29.352986 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 00:45:29.353037 kernel: [drm] features: -context_init Apr 30 00:45:29.356921 kernel: [drm] number of scanouts: 1 Apr 30 00:45:29.357031 kernel: [drm] number of cap sets: 0 Apr 30 00:45:29.362921 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 00:45:29.372957 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 00:45:29.385244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:45:29.387094 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 00:45:29.410301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:45:29.411962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:45:29.418495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:45:29.490891 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:45:29.535628 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:45:29.543319 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:45:29.570932 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:45:29.599024 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:45:29.600134 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:45:29.600822 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:45:29.603165 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:45:29.603844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:45:29.604792 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
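Unit names such as systemd-fsck@dev-disk-by\x2dlabel-OEM.service above are systemd's escaped form of the underlying device path. systemd-escape reproduces the mangling when such units need to be addressed from a shell:

    systemd-escape --path /dev/disk/by-label/OEM
    # -> dev-disk-by\x2dlabel-OEM
    systemctl status "systemd-fsck@$(systemd-escape --path /dev/disk/by-label/OEM).service"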
Apr 30 00:45:29.605710 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:45:29.606465 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:45:29.607147 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:45:29.607186 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:45:29.607673 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:45:29.611198 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:45:29.614625 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:45:29.627921 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:45:29.631319 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:45:29.632736 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:45:29.633657 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:45:29.634415 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:45:29.635290 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:45:29.635400 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:45:29.642091 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:45:29.647185 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 00:45:29.648326 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:45:29.653422 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:45:29.658142 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:45:29.662677 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:45:29.665118 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:45:29.667151 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:45:29.674025 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:45:29.675834 jq[1452]: false Apr 30 00:45:29.676293 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 00:45:29.692177 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:45:29.698146 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:45:29.704125 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:45:29.706768 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:45:29.707852 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:45:29.710125 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:45:29.718493 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
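docker.socket being listened on before docker.service runs is systemd socket activation: the daemon is launched lazily on the first API connection. Given the earlier warning that ListenStream= referenced the legacy /var/run path, a sketch of overriding the socket path via the standard drop-in mechanism (contents illustrative; Flatcar rewrote the path at runtime anyway):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/override.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload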
Apr 30 00:45:29.721566 dbus-daemon[1451]: [system] SELinux support is enabled Apr 30 00:45:29.721805 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:45:29.722698 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:45:29.728437 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:45:29.730987 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:45:29.745421 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:45:29.745770 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 00:45:29.755516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:45:29.755576 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:45:29.759275 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:45:29.759310 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:45:29.766467 coreos-metadata[1450]: Apr 30 00:45:29.766 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 00:45:29.777424 jq[1464]: true Apr 30 00:45:29.777681 coreos-metadata[1450]: Apr 30 00:45:29.774 INFO Fetch successful Apr 30 00:45:29.777681 coreos-metadata[1450]: Apr 30 00:45:29.774 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found loop4 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found loop5 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found loop6 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found loop7 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda1 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda2 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda3 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found usr Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda4 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda6 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda7 Apr 30 00:45:29.777759 extend-filesystems[1453]: Found sda9 Apr 30 00:45:29.777759 extend-filesystems[1453]: Checking size of /dev/sda9 Apr 30 00:45:29.830010 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 00:45:29.830117 tar[1468]: linux-arm64/helm Apr 30 00:45:29.792878 systemd[1]: Started update-engine.service - Update Engine. 
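coreos-metadata above pulls instance data from Hetzner's link-local metadata service; the same unauthenticated endpoints can be queried by hand from the host (URLs taken verbatim from the log):

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks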
Apr 30 00:45:29.830488 coreos-metadata[1450]: Apr 30 00:45:29.781 INFO Fetch successful Apr 30 00:45:29.830522 extend-filesystems[1453]: Resized partition /dev/sda9 Apr 30 00:45:29.831271 update_engine[1463]: I20250430 00:45:29.779611 1463 main.cc:92] Flatcar Update Engine starting Apr 30 00:45:29.831271 update_engine[1463]: I20250430 00:45:29.788733 1463 update_check_scheduler.cc:74] Next update check in 3m39s Apr 30 00:45:29.793711 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:45:29.831680 jq[1482]: true Apr 30 00:45:29.833996 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:45:29.800131 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:45:29.815503 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:45:29.815680 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:45:29.913645 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:45:29.922335 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:45:29.937236 systemd[1]: Starting sshkeys.service... Apr 30 00:45:29.946799 systemd-logind[1462]: New seat seat0. Apr 30 00:45:29.947986 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1376) Apr 30 00:45:29.957940 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 00:45:29.980406 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:45:29.982218 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 00:45:29.982218 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 00:45:29.982218 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 00:45:29.980525 systemd-logind[1462]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 30 00:45:29.998683 extend-filesystems[1453]: Resized filesystem in /dev/sda9 Apr 30 00:45:29.998683 extend-filesystems[1453]: Found sr0 Apr 30 00:45:29.980723 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:45:29.983360 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:45:29.983967 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:45:29.988461 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 00:45:29.997532 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 00:45:30.002030 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 00:45:30.004105 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:45:30.084173 coreos-metadata[1529]: Apr 30 00:45:30.083 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 00:45:30.088417 coreos-metadata[1529]: Apr 30 00:45:30.088 INFO Fetch successful Apr 30 00:45:30.090069 unknown[1529]: wrote ssh authorized keys file for user: core Apr 30 00:45:30.137815 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:45:30.139937 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 00:45:30.145419 systemd[1]: Finished sshkeys.service. 
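extend-filesystems grew the mounted root ext4 online from 1,617,920 to 9,393,147 4 KiB blocks, i.e. roughly 6.2 GiB to 35.8 GiB. The manual equivalent, using the device name from the log:

    resize2fs /dev/sda9     # online grow to fill the already-enlarged partition
    resize2fs -P /dev/sda9  # optional: print the estimated minimum size instead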
Apr 30 00:45:30.221561 containerd[1476]: time="2025-04-30T00:45:30.220609760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 00:45:30.230957 systemd-networkd[1367]: eth0: Gained IPv6LL Apr 30 00:45:30.231502 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. Apr 30 00:45:30.237947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:45:30.239957 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:45:30.248346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:30.252462 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:45:30.264079 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:45:30.309173 containerd[1476]: time="2025-04-30T00:45:30.309116440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:30.318512 containerd[1476]: time="2025-04-30T00:45:30.318444560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:30.318512 containerd[1476]: time="2025-04-30T00:45:30.318501320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:45:30.318512 containerd[1476]: time="2025-04-30T00:45:30.318521560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 00:45:30.318770 containerd[1476]: time="2025-04-30T00:45:30.318691720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 00:45:30.318807 containerd[1476]: time="2025-04-30T00:45:30.318775120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:30.318920 containerd[1476]: time="2025-04-30T00:45:30.318860320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:30.318920 containerd[1476]: time="2025-04-30T00:45:30.318884200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319109920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319135040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319150320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319159720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319251880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319464840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319579000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319594560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319668680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 00:45:30.321952 containerd[1476]: time="2025-04-30T00:45:30.319709920Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:45:30.325121 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:45:30.327391 containerd[1476]: time="2025-04-30T00:45:30.327350560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:45:30.327561 containerd[1476]: time="2025-04-30T00:45:30.327544280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:45:30.328604 containerd[1476]: time="2025-04-30T00:45:30.328145560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 00:45:30.328604 containerd[1476]: time="2025-04-30T00:45:30.328175120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 00:45:30.328604 containerd[1476]: time="2025-04-30T00:45:30.328191440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:45:30.328604 containerd[1476]: time="2025-04-30T00:45:30.328368880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330193640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330361880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330378680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330417680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330435480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330448960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330461800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330477280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330493320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330505320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330516600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330529800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330550720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.330931 containerd[1476]: time="2025-04-30T00:45:30.330565200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330577640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330590240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330601960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330615560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330627440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330642960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330655280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330674880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330686640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330702640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330715760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330736960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330760800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330775600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.331275 containerd[1476]: time="2025-04-30T00:45:30.330786720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.333938680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.333978680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.333992520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.334146320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.334158040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.334172160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.334187440Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:45:30.335920 containerd[1476]: time="2025-04-30T00:45:30.334204760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 00:45:30.336197 containerd[1476]: time="2025-04-30T00:45:30.334644640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:45:30.336197 containerd[1476]: time="2025-04-30T00:45:30.334710200Z" level=info msg="Connect containerd service" Apr 30 00:45:30.336197 containerd[1476]: time="2025-04-30T00:45:30.334747600Z" level=info msg="using legacy CRI server" Apr 30 00:45:30.336197 containerd[1476]: time="2025-04-30T00:45:30.334755400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:45:30.336197 containerd[1476]: time="2025-04-30T00:45:30.334986080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.336561160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:45:30.337900 
containerd[1476]: time="2025-04-30T00:45:30.337213280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.337262800Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.337429520Z" level=info msg="Start subscribing containerd event" Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.337747840Z" level=info msg="Start recovering state" Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.337843720Z" level=info msg="Start event monitor" Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.337857920Z" level=info msg="Start snapshots syncer" Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.337879080Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:45:30.337900 containerd[1476]: time="2025-04-30T00:45:30.337887600Z" level=info msg="Start streaming server" Apr 30 00:45:30.347920 containerd[1476]: time="2025-04-30T00:45:30.346165400Z" level=info msg="containerd successfully booted in 0.128571s" Apr 30 00:45:30.346267 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:45:30.646399 tar[1468]: linux-arm64/LICENSE Apr 30 00:45:30.646399 tar[1468]: linux-arm64/README.md Apr 30 00:45:30.672130 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:45:30.914862 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:45:30.937719 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:45:30.946088 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:45:30.956432 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:45:30.956629 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:45:30.972240 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:45:30.986337 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:45:30.998833 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:45:31.003006 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:45:31.004200 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:45:31.035753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:31.038363 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:45:31.040043 systemd[1]: Startup finished in 808ms (kernel) + 5.286s (initrd) + 4.093s (userspace) = 10.188s. Apr 30 00:45:31.046649 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:45:31.188056 systemd-networkd[1367]: eth1: Gained IPv6LL Apr 30 00:45:31.188846 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. 
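The CRI config dump above shows the runc runtime configured with SystemdCgroup:true (while the top-level legacy SystemdCgroup field stays false). In containerd's own configuration that per-runtime option looks roughly like the following sketch; Flatcar ships and manages its real config, so this is illustrative only:

    cat <<'EOF' >>/etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF
    systemctl restart containerd

For the "Startup finished in ... = 10.188s" line above, systemd-analyze time reprints the same kernel/initrd/userspace breakdown after boot.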
Apr 30 00:45:31.590380 kubelet[1582]: E0430 00:45:31.590255 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:45:31.593386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:45:31.593606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:45:41.844351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:45:41.854272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:41.964966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:41.970938 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:45:42.021075 kubelet[1601]: E0430 00:45:42.020943 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:45:42.025662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:45:42.025837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:45:52.083676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:45:52.091431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:52.214167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:52.215831 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:45:52.257520 kubelet[1617]: E0430 00:45:52.257445 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:45:52.260809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:45:52.261083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:01.561219 systemd-timesyncd[1356]: Contacted time server 129.70.132.33:123 (2.flatcar.pool.ntp.org). Apr 30 00:46:01.561333 systemd-timesyncd[1356]: Initial clock synchronization to Wed 2025-04-30 00:46:01.591941 UTC. Apr 30 00:46:02.333361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 00:46:02.344188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:02.468277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
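kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-based nodes that file is written by kubeadm init or kubeadm join, so until the node is bootstrapped the unit simply crash-loops, with systemd rescheduling it about every ten seconds (the rising restart counters below). A sketch of confirming the loop:

    systemctl status kubelet            # "Scheduled restart job, restart counter ..."
    journalctl -u kubelet -n 5 -o cat   # the config.yaml open() error above
    ls /var/lib/kubelet/config.yaml     # ENOENT until the node is joined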
Apr 30 00:46:02.469793 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:02.512700 kubelet[1632]: E0430 00:46:02.512631 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:02.516569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:02.516997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:12.583436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 00:46:12.591238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:12.702280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:12.713449 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:12.758790 kubelet[1647]: E0430 00:46:12.758725 1647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:12.761853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:12.762213 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:14.897818 update_engine[1463]: I20250430 00:46:14.897001 1463 update_attempter.cc:509] Updating boot flags... Apr 30 00:46:14.945732 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1662) Apr 30 00:46:14.990925 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1661) Apr 30 00:46:22.833189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 00:46:22.841237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:22.968154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:22.970710 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:23.016657 kubelet[1679]: E0430 00:46:23.016578 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:23.020457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:23.020878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:33.083564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 00:46:33.093659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:33.201008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:46:33.215644 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:33.261826 kubelet[1694]: E0430 00:46:33.261763 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:33.265210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:33.265770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:43.333434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 00:46:43.341216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:43.461587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:43.466695 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:43.510199 kubelet[1709]: E0430 00:46:43.510142 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:43.512921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:43.513223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:53.583883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 00:46:53.591268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:53.729149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:53.732170 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:53.771913 kubelet[1724]: E0430 00:46:53.771822 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:53.774640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:53.774791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:03.833574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 00:47:03.840335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:03.981582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:47:03.986648 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:04.030925 kubelet[1739]: E0430 00:47:04.030845 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:04.033697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:04.034090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:11.681656 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:47:11.693593 systemd[1]: Started sshd@0-91.99.89.231:22-139.178.68.195:60464.service - OpenSSH per-connection server daemon (139.178.68.195:60464). Apr 30 00:47:12.676880 sshd[1747]: Accepted publickey for core from 139.178.68.195 port 60464 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:12.680772 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:12.692708 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:47:12.701236 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:47:12.705519 systemd-logind[1462]: New session 1 of user core. Apr 30 00:47:12.717500 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:47:12.728342 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:47:12.732583 (systemd)[1751]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:47:12.861642 systemd[1751]: Queued start job for default target default.target. Apr 30 00:47:12.871617 systemd[1751]: Created slice app.slice - User Application Slice. Apr 30 00:47:12.871685 systemd[1751]: Reached target paths.target - Paths. Apr 30 00:47:12.871713 systemd[1751]: Reached target timers.target - Timers. Apr 30 00:47:12.874671 systemd[1751]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:47:12.891984 systemd[1751]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:47:12.892110 systemd[1751]: Reached target sockets.target - Sockets. Apr 30 00:47:12.892123 systemd[1751]: Reached target basic.target - Basic System. Apr 30 00:47:12.892170 systemd[1751]: Reached target default.target - Main User Target. Apr 30 00:47:12.892198 systemd[1751]: Startup finished in 152ms. Apr 30 00:47:12.892385 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:47:12.904282 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:47:13.601499 systemd[1]: Started sshd@1-91.99.89.231:22-139.178.68.195:60478.service - OpenSSH per-connection server daemon (139.178.68.195:60478). Apr 30 00:47:14.083527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 00:47:14.094307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:14.237300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:47:14.237621 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:14.286094 kubelet[1772]: E0430 00:47:14.286000 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:14.289842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:14.290009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:14.581306 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 60478 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:14.584162 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:14.592278 systemd-logind[1462]: New session 2 of user core. Apr 30 00:47:14.597321 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:47:15.264403 sshd[1762]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:15.269660 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:47:15.270707 systemd[1]: sshd@1-91.99.89.231:22-139.178.68.195:60478.service: Deactivated successfully. Apr 30 00:47:15.273658 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:47:15.276257 systemd-logind[1462]: Removed session 2. Apr 30 00:47:15.442446 systemd[1]: Started sshd@2-91.99.89.231:22-139.178.68.195:52434.service - OpenSSH per-connection server daemon (139.178.68.195:52434). Apr 30 00:47:16.423952 sshd[1784]: Accepted publickey for core from 139.178.68.195 port 52434 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:16.426092 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:16.433851 systemd-logind[1462]: New session 3 of user core. Apr 30 00:47:16.437114 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:47:17.109695 sshd[1784]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:17.115872 systemd[1]: sshd@2-91.99.89.231:22-139.178.68.195:52434.service: Deactivated successfully. Apr 30 00:47:17.118576 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:47:17.120935 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:47:17.123652 systemd-logind[1462]: Removed session 3. Apr 30 00:47:17.294365 systemd[1]: Started sshd@3-91.99.89.231:22-139.178.68.195:52440.service - OpenSSH per-connection server daemon (139.178.68.195:52440). Apr 30 00:47:18.285514 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 52440 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:18.288036 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:18.292874 systemd-logind[1462]: New session 4 of user core. Apr 30 00:47:18.301230 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:47:18.977757 sshd[1791]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:18.985341 systemd[1]: sshd@3-91.99.89.231:22-139.178.68.195:52440.service: Deactivated successfully. Apr 30 00:47:18.988163 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 30 00:47:18.989281 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:47:18.991313 systemd-logind[1462]: Removed session 4. Apr 30 00:47:19.153880 systemd[1]: Started sshd@4-91.99.89.231:22-139.178.68.195:52448.service - OpenSSH per-connection server daemon (139.178.68.195:52448). Apr 30 00:47:20.165740 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 52448 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:20.168210 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:20.173850 systemd-logind[1462]: New session 5 of user core. Apr 30 00:47:20.182347 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:47:20.710317 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:47:20.710677 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:20.727322 sudo[1801]: pam_unix(sudo:session): session closed for user root Apr 30 00:47:20.890536 sshd[1798]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:20.897201 systemd[1]: sshd@4-91.99.89.231:22-139.178.68.195:52448.service: Deactivated successfully. Apr 30 00:47:20.899742 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:47:20.900880 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:47:20.902221 systemd-logind[1462]: Removed session 5. Apr 30 00:47:21.064411 systemd[1]: Started sshd@5-91.99.89.231:22-139.178.68.195:52456.service - OpenSSH per-connection server daemon (139.178.68.195:52456). Apr 30 00:47:22.068754 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 52456 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:22.070689 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:22.078032 systemd-logind[1462]: New session 6 of user core. Apr 30 00:47:22.082177 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:47:22.601611 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:47:22.601942 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:22.606689 sudo[1810]: pam_unix(sudo:session): session closed for user root Apr 30 00:47:22.613526 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:47:22.613951 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:22.637591 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:47:22.640527 auditctl[1813]: No rules Apr 30 00:47:22.641012 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:47:22.641268 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:47:22.650524 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:47:22.679592 augenrules[1831]: No rules Apr 30 00:47:22.681414 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:47:22.683476 sudo[1809]: pam_unix(sudo:session): session closed for user root Apr 30 00:47:22.845320 sshd[1806]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:22.851264 systemd[1]: sshd@5-91.99.89.231:22-139.178.68.195:52456.service: Deactivated successfully. 
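The sudo entries above delete two audit rules fragments and restart audit-rules.service; auditctl then reports "No rules" and augenrules rebuilds an empty set. The manual equivalent, with the paths taken from the logged commands:

    sudo rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo augenrules --load   # recompile /etc/audit/rules.d/*.rules
    sudo auditctl -l         # prints "No rules" once nothing is loaded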
Apr 30 00:47:22.854232 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:47:22.857858 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:47:22.860384 systemd-logind[1462]: Removed session 6. Apr 30 00:47:23.026244 systemd[1]: Started sshd@6-91.99.89.231:22-139.178.68.195:52468.service - OpenSSH per-connection server daemon (139.178.68.195:52468). Apr 30 00:47:24.014109 sshd[1839]: Accepted publickey for core from 139.178.68.195 port 52468 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:24.016435 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:24.022262 systemd-logind[1462]: New session 7 of user core. Apr 30 00:47:24.039348 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:47:24.333114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 00:47:24.340190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:24.469210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:24.469371 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:24.513288 kubelet[1850]: E0430 00:47:24.513233 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:24.517794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:24.517989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:24.541437 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:47:24.541797 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:24.848382 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:47:24.853988 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:47:25.111338 dockerd[1872]: time="2025-04-30T00:47:25.111048910Z" level=info msg="Starting up" Apr 30 00:47:25.196208 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3411189790-merged.mount: Deactivated successfully. Apr 30 00:47:25.210559 systemd[1]: var-lib-docker-metacopy\x2dcheck1155923726-merged.mount: Deactivated successfully. Apr 30 00:47:25.220940 dockerd[1872]: time="2025-04-30T00:47:25.220641215Z" level=info msg="Loading containers: start." Apr 30 00:47:25.324962 kernel: Initializing XFRM netlink socket Apr 30 00:47:25.425401 systemd-networkd[1367]: docker0: Link UP Apr 30 00:47:25.447407 dockerd[1872]: time="2025-04-30T00:47:25.447325498Z" level=info msg="Loading containers: done." 
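Before settling on a storage driver, dockerd probes the kernel with throwaway mounts (the check-overlayfs-support and metacopy-check mounts systemd reports as deactivated above), then brings up the docker0 bridge. A quick way to confirm what the probes decided, assuming the docker CLI is present on the node:

    docker info --format '{{.Driver}} / {{.ServerVersion}}'   # expect: overlay2 / 26.1.0
    ip link show docker0                                       # the bridge from "Link UP"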
Apr 30 00:47:25.465355 dockerd[1872]: time="2025-04-30T00:47:25.465287862Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:47:25.465539 dockerd[1872]: time="2025-04-30T00:47:25.465425825Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 00:47:25.465609 dockerd[1872]: time="2025-04-30T00:47:25.465585308Z" level=info msg="Daemon has completed initialization" Apr 30 00:47:25.503950 dockerd[1872]: time="2025-04-30T00:47:25.503588320Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:47:25.504321 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:47:26.190267 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck170455573-merged.mount: Deactivated successfully. Apr 30 00:47:26.547109 containerd[1476]: time="2025-04-30T00:47:26.545207349Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Apr 30 00:47:27.178705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501914657.mount: Deactivated successfully. Apr 30 00:47:28.176406 containerd[1476]: time="2025-04-30T00:47:28.176280470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:28.177919 containerd[1476]: time="2025-04-30T00:47:28.177717977Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554700" Apr 30 00:47:28.180469 containerd[1476]: time="2025-04-30T00:47:28.180397108Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:28.185916 containerd[1476]: time="2025-04-30T00:47:28.185781209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:28.187754 containerd[1476]: time="2025-04-30T00:47:28.187436240Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.64217893s" Apr 30 00:47:28.187754 containerd[1476]: time="2025-04-30T00:47:28.187498441Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" Apr 30 00:47:28.188650 containerd[1476]: time="2025-04-30T00:47:28.188611622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Apr 30 00:47:29.284189 containerd[1476]: time="2025-04-30T00:47:29.284046846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:29.286056 containerd[1476]: time="2025-04-30T00:47:29.285998961Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458998" Apr 30 00:47:29.286933 containerd[1476]: 
time="2025-04-30T00:47:29.286801976Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:29.291596 containerd[1476]: time="2025-04-30T00:47:29.291531543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:29.293454 containerd[1476]: time="2025-04-30T00:47:29.293394497Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.104607232s" Apr 30 00:47:29.297030 containerd[1476]: time="2025-04-30T00:47:29.296129827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" Apr 30 00:47:29.297423 containerd[1476]: time="2025-04-30T00:47:29.297391690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Apr 30 00:47:30.247944 containerd[1476]: time="2025-04-30T00:47:30.247421994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:30.249291 containerd[1476]: time="2025-04-30T00:47:30.249224507Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125833" Apr 30 00:47:30.250608 containerd[1476]: time="2025-04-30T00:47:30.250520810Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:30.254460 containerd[1476]: time="2025-04-30T00:47:30.254399119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:30.256081 containerd[1476]: time="2025-04-30T00:47:30.255948187Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 958.444535ms" Apr 30 00:47:30.256081 containerd[1476]: time="2025-04-30T00:47:30.255992028Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" Apr 30 00:47:30.257047 containerd[1476]: time="2025-04-30T00:47:30.257003846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 00:47:31.366850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658007554.mount: Deactivated successfully. 
Apr 30 00:47:31.681103 containerd[1476]: time="2025-04-30T00:47:31.680229151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:31.682654 containerd[1476]: time="2025-04-30T00:47:31.682587992Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871943" Apr 30 00:47:31.683742 containerd[1476]: time="2025-04-30T00:47:31.683680851Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:31.686279 containerd[1476]: time="2025-04-30T00:47:31.686197335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:31.687317 containerd[1476]: time="2025-04-30T00:47:31.687170952Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.4298779s" Apr 30 00:47:31.687317 containerd[1476]: time="2025-04-30T00:47:31.687213352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" Apr 30 00:47:31.688288 containerd[1476]: time="2025-04-30T00:47:31.688247330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 00:47:32.310433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999900184.mount: Deactivated successfully. 
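Note that images pulled through the CRI will not appear in a bare "ctr images ls": the CRI plugin keeps them in containerd's k8s.io namespace. Assuming stock Flatcar containerd defaults:

    ctr -n k8s.io images ls | grep kube-proxy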
Apr 30 00:47:32.911290 containerd[1476]: time="2025-04-30T00:47:32.911173306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:32.912943 containerd[1476]: time="2025-04-30T00:47:32.912658051Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Apr 30 00:47:32.914461 containerd[1476]: time="2025-04-30T00:47:32.914363800Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:32.918359 containerd[1476]: time="2025-04-30T00:47:32.918285667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:32.920433 containerd[1476]: time="2025-04-30T00:47:32.920087297Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.231674604s" Apr 30 00:47:32.920433 containerd[1476]: time="2025-04-30T00:47:32.920134538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 00:47:32.920561 containerd[1476]: time="2025-04-30T00:47:32.920524425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 00:47:33.423555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290758644.mount: Deactivated successfully. 
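The pause:3.10 pull announced above fetches the pod sandbox (infra) image, one copy of which anchors every pod's namespaces. Which tag gets pulled is containerd configuration; the grep below is a sketch assuming the stock CRI plugin config section:

    grep -n 'sandbox_image' /etc/containerd/config.toml 2>/dev/null \
      || containerd config default | grep sandbox_image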
Apr 30 00:47:33.431932 containerd[1476]: time="2025-04-30T00:47:33.430453001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:33.431932 containerd[1476]: time="2025-04-30T00:47:33.431864465Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:33.432128 containerd[1476]: time="2025-04-30T00:47:33.431952706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 30 00:47:33.435811 containerd[1476]: time="2025-04-30T00:47:33.435748929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:33.437219 containerd[1476]: time="2025-04-30T00:47:33.437159913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.600487ms" Apr 30 00:47:33.437219 containerd[1476]: time="2025-04-30T00:47:33.437216834Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 30 00:47:33.438455 containerd[1476]: time="2025-04-30T00:47:33.438321532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 00:47:33.997192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822795838.mount: Deactivated successfully. Apr 30 00:47:34.583199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 00:47:34.592296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:34.752207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:34.754814 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:34.822131 kubelet[2188]: E0430 00:47:34.821812 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:34.825831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:34.827227 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
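"Scheduled restart job, restart counter is at 12" means systemd has now relaunched the still-misconfigured kubelet a dozen times under its Restart= policy. The unit's actual policy is not shown in the log; the query below reads it directly, and the sample output is only a guess at a typical kubeadm-style unit:

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
    # e.g. Restart=always / RestartUSec=10s / NRestarts=12 (illustrative values)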
Apr 30 00:47:35.495927 containerd[1476]: time="2025-04-30T00:47:35.493865497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:35.495927 containerd[1476]: time="2025-04-30T00:47:35.495359361Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" Apr 30 00:47:35.496473 containerd[1476]: time="2025-04-30T00:47:35.496426778Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:35.499769 containerd[1476]: time="2025-04-30T00:47:35.499714830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:35.501734 containerd[1476]: time="2025-04-30T00:47:35.501665901Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.062976682s" Apr 30 00:47:35.501949 containerd[1476]: time="2025-04-30T00:47:35.501925025Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Apr 30 00:47:40.454265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:40.464350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:40.506040 systemd[1]: Reloading requested from client PID 2223 ('systemctl') (unit session-7.scope)... Apr 30 00:47:40.506230 systemd[1]: Reloading... Apr 30 00:47:40.622926 zram_generator::config[2263]: No configuration found. Apr 30 00:47:40.743993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:47:40.814530 systemd[1]: Reloading finished in 307 ms. Apr 30 00:47:40.868726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:40.873659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:40.876158 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:47:40.877131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:40.888492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:41.005168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:41.005361 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:47:41.055451 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:47:41.055451 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
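The reload a few entries up also surfaced a warning that docker.socket still points ListenStream= below the legacy /var/run directory. A hypothetical override drop-in, doing exactly what the message suggests, would silence it on the next reload:

    mkdir -p /etc/systemd/system/docker.socket.d
    # Empty ListenStream= resets the list before re-adding the /run path:
    printf '[Socket]\nListenStream=\nListenStream=/run/docker.sock\n' \
      > /etc/systemd/system/docker.socket.d/10-listen.conf
    systemctl daemon-reload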
Apr 30 00:47:41.055451 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:47:41.055876 kubelet[2313]: I0430 00:47:41.055608 2313 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:47:42.462645 kubelet[2313]: I0430 00:47:42.462590 2313 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 00:47:42.462645 kubelet[2313]: I0430 00:47:42.462627 2313 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:47:42.463315 kubelet[2313]: I0430 00:47:42.463095 2313 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 00:47:42.491554 kubelet[2313]: E0430 00:47:42.491500 2313 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.89.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:42.492993 kubelet[2313]: I0430 00:47:42.492724 2313 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:47:42.502642 kubelet[2313]: E0430 00:47:42.502585 2313 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:47:42.502642 kubelet[2313]: I0430 00:47:42.502640 2313 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:47:42.507793 kubelet[2313]: I0430 00:47:42.507756 2313 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:47:42.508135 kubelet[2313]: I0430 00:47:42.508115 2313 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 00:47:42.508368 kubelet[2313]: I0430 00:47:42.508326 2313 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:47:42.508664 kubelet[2313]: I0430 00:47:42.508374 2313 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-c-89ff891e34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:47:42.509106 kubelet[2313]: I0430 00:47:42.508892 2313 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:47:42.509106 kubelet[2313]: I0430 00:47:42.508956 2313 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 00:47:42.509221 kubelet[2313]: I0430 00:47:42.509204 2313 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:42.513963 kubelet[2313]: I0430 00:47:42.512861 2313 kubelet.go:408] "Attempting to sync node with API server" Apr 30 00:47:42.513963 kubelet[2313]: I0430 00:47:42.512960 2313 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:47:42.513963 kubelet[2313]: I0430 00:47:42.513077 2313 kubelet.go:314] "Adding apiserver pod source" Apr 30 00:47:42.513963 kubelet[2313]: I0430 00:47:42.513095 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:47:42.518243 kubelet[2313]: W0430 00:47:42.517487 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.89.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-89ff891e34&limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:42.518243 kubelet[2313]: E0430 00:47:42.517558 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://91.99.89.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-89ff891e34&limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:42.520700 kubelet[2313]: W0430 00:47:42.519979 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.89.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:42.520700 kubelet[2313]: E0430 00:47:42.520056 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.89.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:42.520700 kubelet[2313]: I0430 00:47:42.520357 2313 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:47:42.522555 kubelet[2313]: I0430 00:47:42.522525 2313 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:47:42.524450 kubelet[2313]: W0430 00:47:42.524429 2313 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 00:47:42.526281 kubelet[2313]: I0430 00:47:42.526255 2313 server.go:1269] "Started kubelet" Apr 30 00:47:42.527941 kubelet[2313]: I0430 00:47:42.527890 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:47:42.532271 kubelet[2313]: E0430 00:47:42.530954 2313 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.89.231:6443/api/v1/namespaces/default/events\": dial tcp 91.99.89.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-c-89ff891e34.183af228c7ebba03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-c-89ff891e34,UID:ci-4081-3-3-c-89ff891e34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-c-89ff891e34,},FirstTimestamp:2025-04-30 00:47:42.526216707 +0000 UTC m=+1.515986159,LastTimestamp:2025-04-30 00:47:42.526216707 +0000 UTC m=+1.515986159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-c-89ff891e34,}" Apr 30 00:47:42.534973 kubelet[2313]: I0430 00:47:42.534648 2313 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:47:42.535666 kubelet[2313]: I0430 00:47:42.535639 2313 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 00:47:42.536826 kubelet[2313]: I0430 00:47:42.536624 2313 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:47:42.537502 kubelet[2313]: E0430 00:47:42.537474 2313 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:42.540374 kubelet[2313]: E0430 00:47:42.539115 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://91.99.89.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-89ff891e34?timeout=10s\": dial tcp 91.99.89.231:6443: connect: connection refused" interval="200ms" Apr 30 00:47:42.540374 kubelet[2313]: I0430 00:47:42.539230 2313 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 00:47:42.540791 kubelet[2313]: I0430 00:47:42.540768 2313 server.go:460] "Adding debug handlers to kubelet server" Apr 30 00:47:42.542919 kubelet[2313]: I0430 00:47:42.536073 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:47:42.543303 kubelet[2313]: I0430 00:47:42.543283 2313 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:47:42.543470 kubelet[2313]: I0430 00:47:42.543459 2313 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:47:42.545454 kubelet[2313]: W0430 00:47:42.545394 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.89.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:42.545590 kubelet[2313]: E0430 00:47:42.545574 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.89.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:42.546186 kubelet[2313]: I0430 00:47:42.546164 2313 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:47:42.546409 kubelet[2313]: I0430 00:47:42.546389 2313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:47:42.549881 kubelet[2313]: E0430 00:47:42.549852 2313 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:47:42.551210 kubelet[2313]: I0430 00:47:42.551179 2313 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:47:42.568131 kubelet[2313]: I0430 00:47:42.568066 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:47:42.571637 kubelet[2313]: I0430 00:47:42.571542 2313 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:47:42.571637 kubelet[2313]: I0430 00:47:42.571575 2313 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:47:42.571637 kubelet[2313]: I0430 00:47:42.571600 2313 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:47:42.572126 kubelet[2313]: E0430 00:47:42.571651 2313 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:47:42.573370 kubelet[2313]: I0430 00:47:42.573327 2313 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:47:42.573370 kubelet[2313]: I0430 00:47:42.573359 2313 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:47:42.573370 kubelet[2313]: I0430 00:47:42.573378 2313 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:42.574615 kubelet[2313]: W0430 00:47:42.574241 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.89.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:42.574615 kubelet[2313]: E0430 00:47:42.574294 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.89.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:42.577358 kubelet[2313]: I0430 00:47:42.577317 2313 policy_none.go:49] "None policy: Start" Apr 30 00:47:42.579458 kubelet[2313]: I0430 00:47:42.579425 2313 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:47:42.579458 kubelet[2313]: I0430 00:47:42.579475 2313 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:47:42.587991 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:47:42.598892 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:47:42.603621 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:47:42.615380 kubelet[2313]: I0430 00:47:42.615246 2313 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:47:42.615549 kubelet[2313]: I0430 00:47:42.615537 2313 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:47:42.615614 kubelet[2313]: I0430 00:47:42.615551 2313 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:47:42.619116 kubelet[2313]: I0430 00:47:42.618948 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:47:42.620039 kubelet[2313]: E0430 00:47:42.619863 2313 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:42.685839 systemd[1]: Created slice kubepods-burstable-pod8759560743e730f4550112cf8a9c6cf5.slice - libcontainer container kubepods-burstable-pod8759560743e730f4550112cf8a9c6cf5.slice. Apr 30 00:47:42.702436 systemd[1]: Created slice kubepods-burstable-podfed15795d9462771ea116f57faabd26f.slice - libcontainer container kubepods-burstable-podfed15795d9462771ea116f57faabd26f.slice. 
Apr 30 00:47:42.707540 systemd[1]: Created slice kubepods-burstable-pod84f9f465b6dbeb37d8232a525e955baf.slice - libcontainer container kubepods-burstable-pod84f9f465b6dbeb37d8232a525e955baf.slice. Apr 30 00:47:42.719939 kubelet[2313]: I0430 00:47:42.717574 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.719939 kubelet[2313]: E0430 00:47:42.720048 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.89.231:6443/api/v1/nodes\": dial tcp 91.99.89.231:6443: connect: connection refused" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.740391 kubelet[2313]: E0430 00:47:42.739976 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.89.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-89ff891e34?timeout=10s\": dial tcp 91.99.89.231:6443: connect: connection refused" interval="400ms" Apr 30 00:47:42.744555 kubelet[2313]: I0430 00:47:42.744464 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8759560743e730f4550112cf8a9c6cf5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-c-89ff891e34\" (UID: \"8759560743e730f4550112cf8a9c6cf5\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.744555 kubelet[2313]: I0430 00:47:42.744549 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.744833 kubelet[2313]: I0430 00:47:42.744609 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.744833 kubelet[2313]: I0430 00:47:42.744648 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84f9f465b6dbeb37d8232a525e955baf-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-c-89ff891e34\" (UID: \"84f9f465b6dbeb37d8232a525e955baf\") " pod="kube-system/kube-scheduler-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.744833 kubelet[2313]: I0430 00:47:42.744708 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8759560743e730f4550112cf8a9c6cf5-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-c-89ff891e34\" (UID: \"8759560743e730f4550112cf8a9c6cf5\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.744833 kubelet[2313]: I0430 00:47:42.744747 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.744833 
kubelet[2313]: I0430 00:47:42.744784 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.745215 kubelet[2313]: I0430 00:47:42.744818 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.745215 kubelet[2313]: I0430 00:47:42.744852 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8759560743e730f4550112cf8a9c6cf5-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-c-89ff891e34\" (UID: \"8759560743e730f4550112cf8a9c6cf5\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.923867 kubelet[2313]: I0430 00:47:42.923296 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:42.923867 kubelet[2313]: E0430 00:47:42.923809 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.89.231:6443/api/v1/nodes\": dial tcp 91.99.89.231:6443: connect: connection refused" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:43.002258 containerd[1476]: time="2025-04-30T00:47:43.002068608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-c-89ff891e34,Uid:8759560743e730f4550112cf8a9c6cf5,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:43.007012 containerd[1476]: time="2025-04-30T00:47:43.006618908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-c-89ff891e34,Uid:fed15795d9462771ea116f57faabd26f,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:43.010959 containerd[1476]: time="2025-04-30T00:47:43.010882443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-c-89ff891e34,Uid:84f9f465b6dbeb37d8232a525e955baf,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:43.141419 kubelet[2313]: E0430 00:47:43.141363 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.89.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-89ff891e34?timeout=10s\": dial tcp 91.99.89.231:6443: connect: connection refused" interval="800ms" Apr 30 00:47:43.327246 kubelet[2313]: I0430 00:47:43.326653 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:43.327246 kubelet[2313]: E0430 00:47:43.327109 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.89.231:6443/api/v1/nodes\": dial tcp 91.99.89.231:6443: connect: connection refused" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:43.559291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176141853.mount: Deactivated successfully. 
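The three RunPodSandbox requests above are static pods: the kubelet reads them from /etc/kubernetes/manifests (the "Adding static pod path" entry earlier) rather than from the still-unreachable API server, and every VerifyControllerAttachedVolume line maps to a hostPath volume declared in those manifests. A sketch for inspecting them on the node:

    ls /etc/kubernetes/manifests/    # kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
    grep -B1 -A2 'hostPath:' /etc/kubernetes/manifests/kube-controller-manager.yaml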
Apr 30 00:47:43.561483 kubelet[2313]: W0430 00:47:43.561422 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.89.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:43.561867 kubelet[2313]: E0430 00:47:43.561496 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.89.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:43.569133 containerd[1476]: time="2025-04-30T00:47:43.568035587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:43.570740 containerd[1476]: time="2025-04-30T00:47:43.569843491Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:43.571596 containerd[1476]: time="2025-04-30T00:47:43.571552713Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 00:47:43.573138 containerd[1476]: time="2025-04-30T00:47:43.572889691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:47:43.573830 containerd[1476]: time="2025-04-30T00:47:43.573685381Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:43.577352 containerd[1476]: time="2025-04-30T00:47:43.576176134Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:43.577641 containerd[1476]: time="2025-04-30T00:47:43.577518072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:47:43.581873 containerd[1476]: time="2025-04-30T00:47:43.581824688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:43.583918 containerd[1476]: time="2025-04-30T00:47:43.583836074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.765268ms" Apr 30 00:47:43.586575 containerd[1476]: time="2025-04-30T00:47:43.586533910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.32902ms" Apr 30 00:47:43.587463 
containerd[1476]: time="2025-04-30T00:47:43.587421241Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.665332ms" Apr 30 00:47:43.730341 containerd[1476]: time="2025-04-30T00:47:43.730137472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:43.730341 containerd[1476]: time="2025-04-30T00:47:43.730204073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:43.730341 containerd[1476]: time="2025-04-30T00:47:43.730215913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:43.730872 containerd[1476]: time="2025-04-30T00:47:43.730308794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:43.733477 containerd[1476]: time="2025-04-30T00:47:43.731761774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:43.733477 containerd[1476]: time="2025-04-30T00:47:43.731819014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:43.733477 containerd[1476]: time="2025-04-30T00:47:43.731847495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:43.733477 containerd[1476]: time="2025-04-30T00:47:43.731992337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:43.735175 containerd[1476]: time="2025-04-30T00:47:43.734818614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:43.735175 containerd[1476]: time="2025-04-30T00:47:43.734875854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:43.735175 containerd[1476]: time="2025-04-30T00:47:43.734891615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:43.735175 containerd[1476]: time="2025-04-30T00:47:43.734991776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:43.745964 kubelet[2313]: W0430 00:47:43.745833 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.89.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:43.745964 kubelet[2313]: E0430 00:47:43.745888 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.89.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:43.752678 kubelet[2313]: W0430 00:47:43.752541 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.89.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-89ff891e34&limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:43.752678 kubelet[2313]: E0430 00:47:43.752614 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.89.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-c-89ff891e34&limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:43.762767 systemd[1]: Started cri-containerd-cb25da1b933599878e8b72dc01abee9d78c8c2d56e237910ee84a9ad0143257d.scope - libcontainer container cb25da1b933599878e8b72dc01abee9d78c8c2d56e237910ee84a9ad0143257d. Apr 30 00:47:43.770642 systemd[1]: Started cri-containerd-1527fc0b38c87bd25ec5c3d706ed66b71b2fcb6160d4f1d456deffccccdf1a64.scope - libcontainer container 1527fc0b38c87bd25ec5c3d706ed66b71b2fcb6160d4f1d456deffccccdf1a64. Apr 30 00:47:43.773549 systemd[1]: Started cri-containerd-58335e59eeea2ad6ea891ca7aca188bedbdfa33e795f5e6f2e7f24bedc50cf1e.scope - libcontainer container 58335e59eeea2ad6ea891ca7aca188bedbdfa33e795f5e6f2e7f24bedc50cf1e. 
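Each cri-containerd-<id>.scope unit systemd starts here wraps exactly one CRI pod sandbox (and, later, its containers), so the ids can be mapped back to pods with crictl; the repeated "connection refused" reflector errors are expected at this point, since the kubelet is watching an API server it has not yet finished starting. Using the same assumed endpoint as before:

    crictl pods                       # sandbox ids match the .scope names above
    crictl ps -a --label io.kubernetes.pod.namespace=kube-system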
Apr 30 00:47:43.827290 containerd[1476]: time="2025-04-30T00:47:43.827242745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-c-89ff891e34,Uid:fed15795d9462771ea116f57faabd26f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb25da1b933599878e8b72dc01abee9d78c8c2d56e237910ee84a9ad0143257d\"" Apr 30 00:47:43.836351 containerd[1476]: time="2025-04-30T00:47:43.836302784Z" level=info msg="CreateContainer within sandbox \"cb25da1b933599878e8b72dc01abee9d78c8c2d56e237910ee84a9ad0143257d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:47:43.854429 containerd[1476]: time="2025-04-30T00:47:43.854027336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-c-89ff891e34,Uid:8759560743e730f4550112cf8a9c6cf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"58335e59eeea2ad6ea891ca7aca188bedbdfa33e795f5e6f2e7f24bedc50cf1e\"" Apr 30 00:47:43.868227 containerd[1476]: time="2025-04-30T00:47:43.868183842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-c-89ff891e34,Uid:84f9f465b6dbeb37d8232a525e955baf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1527fc0b38c87bd25ec5c3d706ed66b71b2fcb6160d4f1d456deffccccdf1a64\"" Apr 30 00:47:43.873348 containerd[1476]: time="2025-04-30T00:47:43.873255388Z" level=info msg="CreateContainer within sandbox \"58335e59eeea2ad6ea891ca7aca188bedbdfa33e795f5e6f2e7f24bedc50cf1e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:47:43.874236 containerd[1476]: time="2025-04-30T00:47:43.873943477Z" level=info msg="CreateContainer within sandbox \"1527fc0b38c87bd25ec5c3d706ed66b71b2fcb6160d4f1d456deffccccdf1a64\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:47:43.878498 containerd[1476]: time="2025-04-30T00:47:43.878090452Z" level=info msg="CreateContainer within sandbox \"cb25da1b933599878e8b72dc01abee9d78c8c2d56e237910ee84a9ad0143257d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fffd061513cfb2fe6c25528b46e155202540585b4e35fe93b701ca736e5887fc\"" Apr 30 00:47:43.878938 containerd[1476]: time="2025-04-30T00:47:43.878886422Z" level=info msg="StartContainer for \"fffd061513cfb2fe6c25528b46e155202540585b4e35fe93b701ca736e5887fc\"" Apr 30 00:47:43.897848 containerd[1476]: time="2025-04-30T00:47:43.897796510Z" level=info msg="CreateContainer within sandbox \"1527fc0b38c87bd25ec5c3d706ed66b71b2fcb6160d4f1d456deffccccdf1a64\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f8c12e58741bd638ebe48386c03282e86d36aadf3405fdd441eeb38f33e18cd3\"" Apr 30 00:47:43.898838 containerd[1476]: time="2025-04-30T00:47:43.898800323Z" level=info msg="StartContainer for \"f8c12e58741bd638ebe48386c03282e86d36aadf3405fdd441eeb38f33e18cd3\"" Apr 30 00:47:43.903072 containerd[1476]: time="2025-04-30T00:47:43.903023499Z" level=info msg="CreateContainer within sandbox \"58335e59eeea2ad6ea891ca7aca188bedbdfa33e795f5e6f2e7f24bedc50cf1e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8181086e169746150851537a48fff3de2cf403b2a8139dde206cd8ae8a1580c9\"" Apr 30 00:47:43.904317 containerd[1476]: time="2025-04-30T00:47:43.904228354Z" level=info msg="StartContainer for \"8181086e169746150851537a48fff3de2cf403b2a8139dde206cd8ae8a1580c9\"" Apr 30 00:47:43.916155 systemd[1]: Started cri-containerd-fffd061513cfb2fe6c25528b46e155202540585b4e35fe93b701ca736e5887fc.scope - libcontainer container 
fffd061513cfb2fe6c25528b46e155202540585b4e35fe93b701ca736e5887fc. Apr 30 00:47:43.944252 kubelet[2313]: E0430 00:47:43.944167 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.89.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-c-89ff891e34?timeout=10s\": dial tcp 91.99.89.231:6443: connect: connection refused" interval="1.6s" Apr 30 00:47:43.945130 systemd[1]: Started cri-containerd-f8c12e58741bd638ebe48386c03282e86d36aadf3405fdd441eeb38f33e18cd3.scope - libcontainer container f8c12e58741bd638ebe48386c03282e86d36aadf3405fdd441eeb38f33e18cd3. Apr 30 00:47:43.962703 systemd[1]: Started cri-containerd-8181086e169746150851537a48fff3de2cf403b2a8139dde206cd8ae8a1580c9.scope - libcontainer container 8181086e169746150851537a48fff3de2cf403b2a8139dde206cd8ae8a1580c9. Apr 30 00:47:43.997905 containerd[1476]: time="2025-04-30T00:47:43.996537205Z" level=info msg="StartContainer for \"fffd061513cfb2fe6c25528b46e155202540585b4e35fe93b701ca736e5887fc\" returns successfully" Apr 30 00:47:44.031071 containerd[1476]: time="2025-04-30T00:47:44.030802645Z" level=info msg="StartContainer for \"f8c12e58741bd638ebe48386c03282e86d36aadf3405fdd441eeb38f33e18cd3\" returns successfully" Apr 30 00:47:44.043810 kubelet[2313]: W0430 00:47:44.043651 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.89.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.89.231:6443: connect: connection refused Apr 30 00:47:44.046629 kubelet[2313]: E0430 00:47:44.044188 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.89.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.89.231:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:44.052308 containerd[1476]: time="2025-04-30T00:47:44.052175879Z" level=info msg="StartContainer for \"8181086e169746150851537a48fff3de2cf403b2a8139dde206cd8ae8a1580c9\" returns successfully" Apr 30 00:47:44.130382 kubelet[2313]: I0430 00:47:44.129840 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:44.132960 kubelet[2313]: E0430 00:47:44.131586 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.89.231:6443/api/v1/nodes\": dial tcp 91.99.89.231:6443: connect: connection refused" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:45.735074 kubelet[2313]: I0430 00:47:45.734360 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:46.423511 kubelet[2313]: I0430 00:47:46.423472 2313 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:46.426207 kubelet[2313]: E0430 00:47:46.426174 2313 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-c-89ff891e34\": node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:46.461723 kubelet[2313]: E0430 00:47:46.461670 2313 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:46.562631 kubelet[2313]: E0430 00:47:46.562578 2313 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:46.663323 kubelet[2313]: 
E0430 00:47:46.663275 2313 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:46.764138 kubelet[2313]: E0430 00:47:46.763412 2313 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:47.520270 kubelet[2313]: I0430 00:47:47.520010 2313 apiserver.go:52] "Watching apiserver" Apr 30 00:47:47.540398 kubelet[2313]: I0430 00:47:47.540297 2313 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 00:47:48.467183 systemd[1]: Reloading requested from client PID 2585 ('systemctl') (unit session-7.scope)... Apr 30 00:47:48.467668 systemd[1]: Reloading... Apr 30 00:47:48.577939 zram_generator::config[2625]: No configuration found. Apr 30 00:47:48.698603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:47:48.783339 systemd[1]: Reloading finished in 315 ms. Apr 30 00:47:48.826197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:48.843673 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:47:48.844192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:48.844349 systemd[1]: kubelet.service: Consumed 1.966s CPU time, 116.4M memory peak, 0B memory swap peak. Apr 30 00:47:48.852300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:48.983061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:48.991554 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:47:49.046103 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:47:49.046103 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:47:49.046103 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:47:49.046103 kubelet[2670]: I0430 00:47:49.045770 2670 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:47:49.055939 kubelet[2670]: I0430 00:47:49.054035 2670 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 00:47:49.055939 kubelet[2670]: I0430 00:47:49.054071 2670 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:47:49.055939 kubelet[2670]: I0430 00:47:49.054461 2670 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 00:47:49.058179 kubelet[2670]: I0430 00:47:49.058138 2670 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
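
The "Client rotation is on" and certificate_store lines above show the restarted kubelet (PID 2670) authenticating with the bootstrapped client credentials under /var/lib/kubelet/pki; kubelet-client-current.pem is a combined PEM file carrying both the certificate and its private key. A minimal Go sketch of loading such a combined pair with the standard library (illustrative only, not the kubelet's own loading path; only the file path is taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// The combined PEM holds both blocks, so the same path is passed for
	// the certificate file and the key file; tls skips non-matching blocks.
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("certificates in chain:", len(pair.Certificate))
}
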
Apr 30 00:47:49.063590 kubelet[2670]: I0430 00:47:49.063550 2670 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:47:49.069242 kubelet[2670]: E0430 00:47:49.069197 2670 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:47:49.069441 kubelet[2670]: I0430 00:47:49.069427 2670 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:47:49.072610 kubelet[2670]: I0430 00:47:49.072584 2670 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:47:49.072946 kubelet[2670]: I0430 00:47:49.072930 2670 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 00:47:49.073246 kubelet[2670]: I0430 00:47:49.073219 2670 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:47:49.073528 kubelet[2670]: I0430 00:47:49.073336 2670 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-c-89ff891e34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:47:49.073666 kubelet[2670]: I0430 00:47:49.073651 2670 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:47:49.073720 kubelet[2670]: I0430 00:47:49.073712 2670 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 00:47:49.073853 kubelet[2670]: I0430 00:47:49.073842 2670 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:49.074117 kubelet[2670]: I0430 00:47:49.074092 2670 kubelet.go:408] "Attempting to sync node with API server" Apr 30 00:47:49.074231 kubelet[2670]: I0430 00:47:49.074219 2670 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:47:49.074318 kubelet[2670]: I0430 
00:47:49.074309 2670 kubelet.go:314] "Adding apiserver pod source" Apr 30 00:47:49.074388 kubelet[2670]: I0430 00:47:49.074380 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:47:49.076479 kubelet[2670]: I0430 00:47:49.076448 2670 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:47:49.077242 kubelet[2670]: I0430 00:47:49.077107 2670 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:47:49.077874 kubelet[2670]: I0430 00:47:49.077676 2670 server.go:1269] "Started kubelet" Apr 30 00:47:49.084881 kubelet[2670]: I0430 00:47:49.084850 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:47:49.090918 kubelet[2670]: I0430 00:47:49.089051 2670 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:47:49.108934 kubelet[2670]: I0430 00:47:49.108603 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:47:49.111584 kubelet[2670]: I0430 00:47:49.110348 2670 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 00:47:49.111584 kubelet[2670]: E0430 00:47:49.110617 2670 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-c-89ff891e34\" not found" Apr 30 00:47:49.111584 kubelet[2670]: I0430 00:47:49.111457 2670 server.go:460] "Adding debug handlers to kubelet server" Apr 30 00:47:49.114944 kubelet[2670]: I0430 00:47:49.114548 2670 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 00:47:49.114944 kubelet[2670]: I0430 00:47:49.114711 2670 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:47:49.117921 kubelet[2670]: I0430 00:47:49.117752 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:47:49.120231 kubelet[2670]: I0430 00:47:49.120181 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:47:49.120231 kubelet[2670]: I0430 00:47:49.120221 2670 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:47:49.120231 kubelet[2670]: I0430 00:47:49.120239 2670 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:47:49.120392 kubelet[2670]: E0430 00:47:49.120284 2670 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:47:49.122596 kubelet[2670]: I0430 00:47:49.122446 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:47:49.123259 kubelet[2670]: I0430 00:47:49.123239 2670 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:47:49.126153 kubelet[2670]: I0430 00:47:49.126127 2670 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:47:49.126393 kubelet[2670]: I0430 00:47:49.126370 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:47:49.126783 kubelet[2670]: E0430 00:47:49.126723 2670 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:47:49.130984 kubelet[2670]: I0430 00:47:49.130956 2670 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:47:49.192245 kubelet[2670]: I0430 00:47:49.191826 2670 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:47:49.192245 kubelet[2670]: I0430 00:47:49.191850 2670 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:47:49.192245 kubelet[2670]: I0430 00:47:49.191875 2670 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:49.192245 kubelet[2670]: I0430 00:47:49.192077 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:47:49.192245 kubelet[2670]: I0430 00:47:49.192089 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:47:49.192245 kubelet[2670]: I0430 00:47:49.192145 2670 policy_none.go:49] "None policy: Start" Apr 30 00:47:49.193592 kubelet[2670]: I0430 00:47:49.193558 2670 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:47:49.193592 kubelet[2670]: I0430 00:47:49.193595 2670 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:47:49.194011 kubelet[2670]: I0430 00:47:49.193849 2670 state_mem.go:75] "Updated machine memory state" Apr 30 00:47:49.199340 kubelet[2670]: I0430 00:47:49.198965 2670 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:47:49.199340 kubelet[2670]: I0430 00:47:49.199333 2670 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:47:49.199494 kubelet[2670]: I0430 00:47:49.199347 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:47:49.199708 kubelet[2670]: I0430 00:47:49.199678 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:47:49.234490 kubelet[2670]: E0430 00:47:49.234446 2670 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-3-c-89ff891e34\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.305236 kubelet[2670]: I0430 00:47:49.303856 2670 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.315523 kubelet[2670]: I0430 00:47:49.315475 2670 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.315717 kubelet[2670]: I0430 00:47:49.315590 2670 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.415325 kubelet[2670]: I0430 00:47:49.415156 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8759560743e730f4550112cf8a9c6cf5-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-c-89ff891e34\" (UID: \"8759560743e730f4550112cf8a9c6cf5\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.415325 kubelet[2670]: I0430 00:47:49.415245 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8759560743e730f4550112cf8a9c6cf5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-c-89ff891e34\" (UID: \"8759560743e730f4550112cf8a9c6cf5\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.415325 kubelet[2670]: I0430 00:47:49.415298 
2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.415325 kubelet[2670]: I0430 00:47:49.415332 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.415325 kubelet[2670]: I0430 00:47:49.415363 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8759560743e730f4550112cf8a9c6cf5-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-c-89ff891e34\" (UID: \"8759560743e730f4550112cf8a9c6cf5\") " pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.416079 kubelet[2670]: I0430 00:47:49.415395 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.416079 kubelet[2670]: I0430 00:47:49.415434 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.416079 kubelet[2670]: I0430 00:47:49.415467 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84f9f465b6dbeb37d8232a525e955baf-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-c-89ff891e34\" (UID: \"84f9f465b6dbeb37d8232a525e955baf\") " pod="kube-system/kube-scheduler-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:49.416079 kubelet[2670]: I0430 00:47:49.415498 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fed15795d9462771ea116f57faabd26f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-c-89ff891e34\" (UID: \"fed15795d9462771ea116f57faabd26f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:50.075535 kubelet[2670]: I0430 00:47:50.075480 2670 apiserver.go:52] "Watching apiserver" Apr 30 00:47:50.115498 kubelet[2670]: I0430 00:47:50.115382 2670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 00:47:50.183651 kubelet[2670]: E0430 00:47:50.183606 2670 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-c-89ff891e34\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" Apr 30 00:47:50.231243 kubelet[2670]: I0430 00:47:50.231151 2670 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-c-89ff891e34" podStartSLOduration=1.231130892 podStartE2EDuration="1.231130892s" podCreationTimestamp="2025-04-30 00:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:50.216955332 +0000 UTC m=+1.217672061" watchObservedRunningTime="2025-04-30 00:47:50.231130892 +0000 UTC m=+1.231847621" Apr 30 00:47:50.231594 kubelet[2670]: I0430 00:47:50.231401 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-c-89ff891e34" podStartSLOduration=3.231391134 podStartE2EDuration="3.231391134s" podCreationTimestamp="2025-04-30 00:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:50.231315934 +0000 UTC m=+1.232032663" watchObservedRunningTime="2025-04-30 00:47:50.231391134 +0000 UTC m=+1.232107863" Apr 30 00:47:50.275223 kubelet[2670]: I0430 00:47:50.275155 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-c-89ff891e34" podStartSLOduration=1.2751342669999999 podStartE2EDuration="1.275134267s" podCreationTimestamp="2025-04-30 00:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:50.250426309 +0000 UTC m=+1.251143038" watchObservedRunningTime="2025-04-30 00:47:50.275134267 +0000 UTC m=+1.275850996" Apr 30 00:47:54.634200 sudo[1857]: pam_unix(sudo:session): session closed for user root Apr 30 00:47:54.795187 sshd[1839]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:54.799871 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:47:54.801166 systemd[1]: sshd@6-91.99.89.231:22-139.178.68.195:52468.service: Deactivated successfully. Apr 30 00:47:54.807227 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:47:54.809038 systemd[1]: session-7.scope: Consumed 6.619s CPU time, 154.3M memory peak, 0B memory swap peak. Apr 30 00:47:54.813299 systemd-logind[1462]: Removed session 7. Apr 30 00:47:54.991119 kubelet[2670]: I0430 00:47:54.990315 2670 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:47:54.992514 kubelet[2670]: I0430 00:47:54.991887 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:47:54.992550 containerd[1476]: time="2025-04-30T00:47:54.991560228Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:47:55.680480 systemd[1]: Created slice kubepods-besteffort-pod2b3ebaf7_b3be_423b_a349_e4b1b4aac5d8.slice - libcontainer container kubepods-besteffort-pod2b3ebaf7_b3be_423b_a349_e4b1b4aac5d8.slice. 
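
The pod_startup_latency_tracker entries above are internally consistent: no image pull happened for these static pods (both pull timestamps are the zero time 0001-01-01), so the reported duration is simply watchObservedRunningTime minus podCreationTimestamp. A small Go check against the kube-apiserver figures quoted in the log (a sketch of the arithmetic, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-04-30 00:47:49 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-04-30 00:47:50.231130892 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.231130892s, matching podStartSLOduration in the log.
	fmt.Println(running.Sub(created))
}
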
Apr 30 00:47:55.757320 kubelet[2670]: I0430 00:47:55.757195 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8-kube-proxy\") pod \"kube-proxy-q5lr6\" (UID: \"2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8\") " pod="kube-system/kube-proxy-q5lr6" Apr 30 00:47:55.757911 kubelet[2670]: I0430 00:47:55.757740 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8-xtables-lock\") pod \"kube-proxy-q5lr6\" (UID: \"2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8\") " pod="kube-system/kube-proxy-q5lr6" Apr 30 00:47:55.758254 kubelet[2670]: I0430 00:47:55.758118 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8-lib-modules\") pod \"kube-proxy-q5lr6\" (UID: \"2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8\") " pod="kube-system/kube-proxy-q5lr6" Apr 30 00:47:55.758254 kubelet[2670]: I0430 00:47:55.758193 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfvm7\" (UniqueName: \"kubernetes.io/projected/2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8-kube-api-access-tfvm7\") pod \"kube-proxy-q5lr6\" (UID: \"2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8\") " pod="kube-system/kube-proxy-q5lr6" Apr 30 00:47:55.858919 systemd[1]: Created slice kubepods-besteffort-pod4db85876_a41f_4aea_88a0_d8176627915a.slice - libcontainer container kubepods-besteffort-pod4db85876_a41f_4aea_88a0_d8176627915a.slice. Apr 30 00:47:55.860162 kubelet[2670]: I0430 00:47:55.859555 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4db85876-a41f-4aea-88a0-d8176627915a-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-8c2pv\" (UID: \"4db85876-a41f-4aea-88a0-d8176627915a\") " pod="tigera-operator/tigera-operator-6f6897fdc5-8c2pv" Apr 30 00:47:55.860162 kubelet[2670]: I0430 00:47:55.859638 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h689f\" (UniqueName: \"kubernetes.io/projected/4db85876-a41f-4aea-88a0-d8176627915a-kube-api-access-h689f\") pod \"tigera-operator-6f6897fdc5-8c2pv\" (UID: \"4db85876-a41f-4aea-88a0-d8176627915a\") " pod="tigera-operator/tigera-operator-6f6897fdc5-8c2pv" Apr 30 00:47:55.989237 containerd[1476]: time="2025-04-30T00:47:55.988982437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5lr6,Uid:2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:56.020023 containerd[1476]: time="2025-04-30T00:47:56.019839546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:56.020023 containerd[1476]: time="2025-04-30T00:47:56.019943067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:56.020023 containerd[1476]: time="2025-04-30T00:47:56.019960307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:56.021068 containerd[1476]: time="2025-04-30T00:47:56.020045788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:56.048239 systemd[1]: Started cri-containerd-7627c85f06cffa8f89ef86181ca1132a5be7af4de97a756de59adc11682dade9.scope - libcontainer container 7627c85f06cffa8f89ef86181ca1132a5be7af4de97a756de59adc11682dade9. Apr 30 00:47:56.074393 containerd[1476]: time="2025-04-30T00:47:56.074192927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5lr6,Uid:2b3ebaf7-b3be-423b-a349-e4b1b4aac5d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7627c85f06cffa8f89ef86181ca1132a5be7af4de97a756de59adc11682dade9\"" Apr 30 00:47:56.081529 containerd[1476]: time="2025-04-30T00:47:56.081160316Z" level=info msg="CreateContainer within sandbox \"7627c85f06cffa8f89ef86181ca1132a5be7af4de97a756de59adc11682dade9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:47:56.108372 containerd[1476]: time="2025-04-30T00:47:56.108281426Z" level=info msg="CreateContainer within sandbox \"7627c85f06cffa8f89ef86181ca1132a5be7af4de97a756de59adc11682dade9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ad6c64c5b97c91093e50df72921e4f820236aba50d7d7fc780aba66e59f7ca3\"" Apr 30 00:47:56.111174 containerd[1476]: time="2025-04-30T00:47:56.111123415Z" level=info msg="StartContainer for \"6ad6c64c5b97c91093e50df72921e4f820236aba50d7d7fc780aba66e59f7ca3\"" Apr 30 00:47:56.139022 systemd[1]: Started cri-containerd-6ad6c64c5b97c91093e50df72921e4f820236aba50d7d7fc780aba66e59f7ca3.scope - libcontainer container 6ad6c64c5b97c91093e50df72921e4f820236aba50d7d7fc780aba66e59f7ca3. Apr 30 00:47:56.164495 containerd[1476]: time="2025-04-30T00:47:56.164057022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-8c2pv,Uid:4db85876-a41f-4aea-88a0-d8176627915a,Namespace:tigera-operator,Attempt:0,}" Apr 30 00:47:56.182740 containerd[1476]: time="2025-04-30T00:47:56.182689047Z" level=info msg="StartContainer for \"6ad6c64c5b97c91093e50df72921e4f820236aba50d7d7fc780aba66e59f7ca3\" returns successfully" Apr 30 00:47:56.212666 containerd[1476]: time="2025-04-30T00:47:56.212413783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:56.212666 containerd[1476]: time="2025-04-30T00:47:56.212474503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:56.212666 containerd[1476]: time="2025-04-30T00:47:56.212489464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:56.212666 containerd[1476]: time="2025-04-30T00:47:56.212572984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:56.235155 systemd[1]: Started cri-containerd-79c05f12b1e83966c1983b95dd18baa9b1a0f073837c44ab305884179194562a.scope - libcontainer container 79c05f12b1e83966c1983b95dd18baa9b1a0f073837c44ab305884179194562a. 
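
The four "loading plugin" lines recur for each pod because every RunPodSandbox starts a fresh io.containerd.runc.v2 shim, which loads its event and TTRPC task/pause plugins before the sandbox ID is handed back. The 64-character identifiers in these entries (7627c85f06cf..., 6ad6c64c5b97..., 79c05f12b1e8...) appear to be 32 random bytes rendered as hex; a hypothetical generator of the same shape (newID is an illustration, not containerd's API):

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newID returns 32 cryptographically random bytes as 64 hex characters,
// the same shape as the sandbox and container IDs in the log above.
func newID() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := newID()
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}
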
Apr 30 00:47:56.293045 containerd[1476]: time="2025-04-30T00:47:56.291026725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-8c2pv,Uid:4db85876-a41f-4aea-88a0-d8176627915a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"79c05f12b1e83966c1983b95dd18baa9b1a0f073837c44ab305884179194562a\"" Apr 30 00:47:56.296979 containerd[1476]: time="2025-04-30T00:47:56.296932144Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 00:47:57.199409 kubelet[2670]: I0430 00:47:57.199124 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q5lr6" podStartSLOduration=2.199097406 podStartE2EDuration="2.199097406s" podCreationTimestamp="2025-04-30 00:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:57.199093566 +0000 UTC m=+8.199810295" watchObservedRunningTime="2025-04-30 00:47:57.199097406 +0000 UTC m=+8.199814135" Apr 30 00:47:58.006051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433316358.mount: Deactivated successfully. Apr 30 00:47:59.635549 containerd[1476]: time="2025-04-30T00:47:59.635483166Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:59.636972 containerd[1476]: time="2025-04-30T00:47:59.636922139Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 00:47:59.637941 containerd[1476]: time="2025-04-30T00:47:59.637861588Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:59.642400 containerd[1476]: time="2025-04-30T00:47:59.642331070Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:59.643389 containerd[1476]: time="2025-04-30T00:47:59.643250158Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 3.346270734s" Apr 30 00:47:59.643389 containerd[1476]: time="2025-04-30T00:47:59.643293359Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 00:47:59.647931 containerd[1476]: time="2025-04-30T00:47:59.647731321Z" level=info msg="CreateContainer within sandbox \"79c05f12b1e83966c1983b95dd18baa9b1a0f073837c44ab305884179194562a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 00:47:59.664608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128506140.mount: Deactivated successfully. 
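
The pull record above pins quay.io/tigera/operator:v1.36.7 by repo digest, so the roughly 19.3 MB fetched in about 3.35 s (on the order of 5.8 MB/s) is accepted only if its content hashes to sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9. A sketch of the underlying check with Go's standard library (the real pull verifies the manifest and each layer separately; the payload below is a stand-in):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	blob := []byte("stand-in for fetched image bytes") // hypothetical payload
	sum := sha256.Sum256(blob)
	// Content is trusted only when this value equals the expected digest.
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
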
Apr 30 00:47:59.669120 containerd[1476]: time="2025-04-30T00:47:59.669013960Z" level=info msg="CreateContainer within sandbox \"79c05f12b1e83966c1983b95dd18baa9b1a0f073837c44ab305884179194562a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"087703fd40a9b51466f2666b1d4af68a1cadc172d47c11795fa855924eafeaa8\"" Apr 30 00:47:59.671530 containerd[1476]: time="2025-04-30T00:47:59.670154571Z" level=info msg="StartContainer for \"087703fd40a9b51466f2666b1d4af68a1cadc172d47c11795fa855924eafeaa8\"" Apr 30 00:47:59.703101 systemd[1]: Started cri-containerd-087703fd40a9b51466f2666b1d4af68a1cadc172d47c11795fa855924eafeaa8.scope - libcontainer container 087703fd40a9b51466f2666b1d4af68a1cadc172d47c11795fa855924eafeaa8. Apr 30 00:47:59.737524 containerd[1476]: time="2025-04-30T00:47:59.737226961Z" level=info msg="StartContainer for \"087703fd40a9b51466f2666b1d4af68a1cadc172d47c11795fa855924eafeaa8\" returns successfully" Apr 30 00:48:02.105139 kubelet[2670]: I0430 00:48:02.104699 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-8c2pv" podStartSLOduration=3.755462422 podStartE2EDuration="7.104675064s" podCreationTimestamp="2025-04-30 00:47:55 +0000 UTC" firstStartedPulling="2025-04-30 00:47:56.295406569 +0000 UTC m=+7.296123298" lastFinishedPulling="2025-04-30 00:47:59.644619211 +0000 UTC m=+10.645335940" observedRunningTime="2025-04-30 00:48:00.227522447 +0000 UTC m=+11.228239176" watchObservedRunningTime="2025-04-30 00:48:02.104675064 +0000 UTC m=+13.105391793" Apr 30 00:48:05.494371 systemd[1]: Created slice kubepods-besteffort-podf7028c7c_3d89_4bf3_bfc5_5421ece44e98.slice - libcontainer container kubepods-besteffort-podf7028c7c_3d89_4bf3_bfc5_5421ece44e98.slice. Apr 30 00:48:05.522436 kubelet[2670]: I0430 00:48:05.522280 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7028c7c-3d89-4bf3-bfc5-5421ece44e98-tigera-ca-bundle\") pod \"calico-typha-74fd7b59c-8jwsm\" (UID: \"f7028c7c-3d89-4bf3-bfc5-5421ece44e98\") " pod="calico-system/calico-typha-74fd7b59c-8jwsm" Apr 30 00:48:05.522436 kubelet[2670]: I0430 00:48:05.522340 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f7028c7c-3d89-4bf3-bfc5-5421ece44e98-typha-certs\") pod \"calico-typha-74fd7b59c-8jwsm\" (UID: \"f7028c7c-3d89-4bf3-bfc5-5421ece44e98\") " pod="calico-system/calico-typha-74fd7b59c-8jwsm" Apr 30 00:48:05.522436 kubelet[2670]: I0430 00:48:05.522357 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7xr\" (UniqueName: \"kubernetes.io/projected/f7028c7c-3d89-4bf3-bfc5-5421ece44e98-kube-api-access-vs7xr\") pod \"calico-typha-74fd7b59c-8jwsm\" (UID: \"f7028c7c-3d89-4bf3-bfc5-5421ece44e98\") " pod="calico-system/calico-typha-74fd7b59c-8jwsm" Apr 30 00:48:05.709670 systemd[1]: Created slice kubepods-besteffort-pod8c6ff056_8ee9_4a95_a610_5cb3519ad437.slice - libcontainer container kubepods-besteffort-pod8c6ff056_8ee9_4a95_a610_5cb3519ad437.slice. 
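
The tigera-operator startup record above completes the latency picture from the earlier entries: this pod did pull an image, and its podStartSLOduration excludes the pull window, i.e. podStartE2EDuration minus (lastFinishedPulling - firstStartedPulling). Checking the quoted numbers (plain arithmetic, not kubelet code):

package main

import "fmt"

func main() {
	e2e := 7.104675064                  // podStartE2EDuration, seconds
	pull := 59.644619211 - 56.295406569 // pull window within minute 00:47, seconds
	// Prints 3.755462422, matching podStartSLOduration in the log.
	fmt.Printf("%.9f\n", e2e-pull)
}
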
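
In the FlexVolume probe failures recorded below, the same three-line pattern repeats: the kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary is missing ("executable file not found in $PATH"), the call therefore produces empty output, and unmarshalling an empty string as JSON yields exactly the quoted error. The error string is reproducible with the standard library:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status map[string]interface{}
	// Empty driver output, as when the uds executable is absent.
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}
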
Apr 30 00:48:05.724523 kubelet[2670]: I0430 00:48:05.724477 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-lib-modules\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.724887 kubelet[2670]: I0430 00:48:05.724802 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-policysync\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.724887 kubelet[2670]: I0430 00:48:05.724852 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-var-run-calico\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725215 kubelet[2670]: I0430 00:48:05.724940 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-cni-bin-dir\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725215 kubelet[2670]: I0430 00:48:05.724973 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-cni-net-dir\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725215 kubelet[2670]: I0430 00:48:05.724991 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcbt8\" (UniqueName: \"kubernetes.io/projected/8c6ff056-8ee9-4a95-a610-5cb3519ad437-kube-api-access-bcbt8\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725215 kubelet[2670]: I0430 00:48:05.725011 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-xtables-lock\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725215 kubelet[2670]: I0430 00:48:05.725035 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c6ff056-8ee9-4a95-a610-5cb3519ad437-node-certs\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725412 kubelet[2670]: I0430 00:48:05.725056 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-cni-log-dir\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725412 kubelet[2670]: I0430 00:48:05.725071 2670 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6ff056-8ee9-4a95-a610-5cb3519ad437-tigera-ca-bundle\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725412 kubelet[2670]: I0430 00:48:05.725086 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-var-lib-calico\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.725412 kubelet[2670]: I0430 00:48:05.725107 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c6ff056-8ee9-4a95-a610-5cb3519ad437-flexvol-driver-host\") pod \"calico-node-ml5f9\" (UID: \"8c6ff056-8ee9-4a95-a610-5cb3519ad437\") " pod="calico-system/calico-node-ml5f9" Apr 30 00:48:05.803665 containerd[1476]: time="2025-04-30T00:48:05.802344781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74fd7b59c-8jwsm,Uid:f7028c7c-3d89-4bf3-bfc5-5421ece44e98,Namespace:calico-system,Attempt:0,}" Apr 30 00:48:05.834722 kubelet[2670]: E0430 00:48:05.834135 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.834722 kubelet[2670]: W0430 00:48:05.834161 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.834722 kubelet[2670]: E0430 00:48:05.834207 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.834722 kubelet[2670]: E0430 00:48:05.834600 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.834722 kubelet[2670]: W0430 00:48:05.834612 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.834722 kubelet[2670]: E0430 00:48:05.834647 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.836102 kubelet[2670]: E0430 00:48:05.835784 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.836102 kubelet[2670]: W0430 00:48:05.835809 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.836634 kubelet[2670]: E0430 00:48:05.836615 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.836754 kubelet[2670]: W0430 00:48:05.836740 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.837011 kubelet[2670]: E0430 00:48:05.836707 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.837011 kubelet[2670]: E0430 00:48:05.836890 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.837316 kubelet[2670]: E0430 00:48:05.837264 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.838793 kubelet[2670]: W0430 00:48:05.837298 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.838793 kubelet[2670]: E0430 00:48:05.837450 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.839200 kubelet[2670]: E0430 00:48:05.839184 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.839484 kubelet[2670]: W0430 00:48:05.839328 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.839484 kubelet[2670]: E0430 00:48:05.839431 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.839679 kubelet[2670]: E0430 00:48:05.839642 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.839777 kubelet[2670]: W0430 00:48:05.839765 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.840355 kubelet[2670]: E0430 00:48:05.840335 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.842259 kubelet[2670]: W0430 00:48:05.841644 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.842259 kubelet[2670]: E0430 00:48:05.840727 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.842259 kubelet[2670]: E0430 00:48:05.841731 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.842259 kubelet[2670]: E0430 00:48:05.842083 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.842259 kubelet[2670]: W0430 00:48:05.842095 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.842259 kubelet[2670]: E0430 00:48:05.842186 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.842616 kubelet[2670]: E0430 00:48:05.842601 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.842722 kubelet[2670]: W0430 00:48:05.842709 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.842885 kubelet[2670]: E0430 00:48:05.842860 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.843152 kubelet[2670]: E0430 00:48:05.843130 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.843368 kubelet[2670]: W0430 00:48:05.843210 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.843368 kubelet[2670]: E0430 00:48:05.843251 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.843545 kubelet[2670]: E0430 00:48:05.843531 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.843661 kubelet[2670]: W0430 00:48:05.843597 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.843785 kubelet[2670]: E0430 00:48:05.843757 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.844097 kubelet[2670]: E0430 00:48:05.844081 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.844181 kubelet[2670]: W0430 00:48:05.844169 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.844414 kubelet[2670]: E0430 00:48:05.844400 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.844644 kubelet[2670]: E0430 00:48:05.844632 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.844751 kubelet[2670]: W0430 00:48:05.844738 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.844905 kubelet[2670]: E0430 00:48:05.844872 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.846157 kubelet[2670]: E0430 00:48:05.845442 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.846157 kubelet[2670]: W0430 00:48:05.845459 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.846157 kubelet[2670]: E0430 00:48:05.845589 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.846157 kubelet[2670]: E0430 00:48:05.845951 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.846157 kubelet[2670]: W0430 00:48:05.845963 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.846878 kubelet[2670]: E0430 00:48:05.846701 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.847065 kubelet[2670]: E0430 00:48:05.847052 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.847126 kubelet[2670]: W0430 00:48:05.847114 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.847558 containerd[1476]: time="2025-04-30T00:48:05.847189719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:48:05.847558 containerd[1476]: time="2025-04-30T00:48:05.847250119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:48:05.847558 containerd[1476]: time="2025-04-30T00:48:05.847266399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:05.847558 containerd[1476]: time="2025-04-30T00:48:05.847346360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:05.847746 kubelet[2670]: E0430 00:48:05.847704 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.848633 kubelet[2670]: E0430 00:48:05.848364 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.848633 kubelet[2670]: W0430 00:48:05.848381 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.848633 kubelet[2670]: E0430 00:48:05.848427 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.850537 kubelet[2670]: E0430 00:48:05.850427 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.850537 kubelet[2670]: W0430 00:48:05.850452 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.850537 kubelet[2670]: E0430 00:48:05.850497 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.852011 kubelet[2670]: E0430 00:48:05.851819 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.852011 kubelet[2670]: W0430 00:48:05.851850 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.852011 kubelet[2670]: E0430 00:48:05.851926 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.853409 kubelet[2670]: E0430 00:48:05.852354 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.853409 kubelet[2670]: W0430 00:48:05.852379 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.853409 kubelet[2670]: E0430 00:48:05.852395 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.870099 kubelet[2670]: E0430 00:48:05.870054 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.870099 kubelet[2670]: W0430 00:48:05.870085 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.870283 kubelet[2670]: E0430 00:48:05.870109 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.894458 systemd[1]: Started cri-containerd-128c1d0a10889ed583c87994a02ecc4080f651cd1aa5a8eda0fbf51f2a7ea6d1.scope - libcontainer container 128c1d0a10889ed583c87994a02ecc4080f651cd1aa5a8eda0fbf51f2a7ea6d1. Apr 30 00:48:05.900419 kubelet[2670]: E0430 00:48:05.900360 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:05.919503 kubelet[2670]: E0430 00:48:05.918841 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.919503 kubelet[2670]: W0430 00:48:05.918874 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.919503 kubelet[2670]: E0430 00:48:05.918929 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.920713 kubelet[2670]: E0430 00:48:05.920463 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.920713 kubelet[2670]: W0430 00:48:05.920490 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.920713 kubelet[2670]: E0430 00:48:05.920513 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.922239 kubelet[2670]: E0430 00:48:05.921716 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.922239 kubelet[2670]: W0430 00:48:05.921775 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.922239 kubelet[2670]: E0430 00:48:05.921793 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.922916 kubelet[2670]: E0430 00:48:05.922627 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.922916 kubelet[2670]: W0430 00:48:05.922646 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.923432 kubelet[2670]: E0430 00:48:05.923152 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.924585 kubelet[2670]: E0430 00:48:05.923869 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.924585 kubelet[2670]: W0430 00:48:05.923888 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.924585 kubelet[2670]: E0430 00:48:05.923943 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.925149 kubelet[2670]: E0430 00:48:05.925029 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.925149 kubelet[2670]: W0430 00:48:05.925046 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.925149 kubelet[2670]: E0430 00:48:05.925062 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.925382 kubelet[2670]: E0430 00:48:05.925248 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.925382 kubelet[2670]: W0430 00:48:05.925261 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.925573 kubelet[2670]: E0430 00:48:05.925272 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.925718 kubelet[2670]: E0430 00:48:05.925705 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.925972 kubelet[2670]: W0430 00:48:05.925756 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.925972 kubelet[2670]: E0430 00:48:05.925770 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.926512 kubelet[2670]: E0430 00:48:05.926268 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.926512 kubelet[2670]: W0430 00:48:05.926289 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.926512 kubelet[2670]: E0430 00:48:05.926411 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.927210 kubelet[2670]: E0430 00:48:05.927046 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.927210 kubelet[2670]: W0430 00:48:05.927060 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.927210 kubelet[2670]: E0430 00:48:05.927072 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.927744 kubelet[2670]: E0430 00:48:05.927639 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.927744 kubelet[2670]: W0430 00:48:05.927653 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.927744 kubelet[2670]: E0430 00:48:05.927684 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.929796 kubelet[2670]: E0430 00:48:05.929662 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.929796 kubelet[2670]: W0430 00:48:05.929677 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.929796 kubelet[2670]: E0430 00:48:05.929689 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.930679 kubelet[2670]: E0430 00:48:05.930528 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.930679 kubelet[2670]: W0430 00:48:05.930545 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.930679 kubelet[2670]: E0430 00:48:05.930562 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.931280 kubelet[2670]: E0430 00:48:05.931256 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.931467 kubelet[2670]: W0430 00:48:05.931450 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.931779 kubelet[2670]: E0430 00:48:05.931664 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.932641 kubelet[2670]: E0430 00:48:05.932350 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.932641 kubelet[2670]: W0430 00:48:05.932365 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.933351 kubelet[2670]: E0430 00:48:05.932377 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.933946 kubelet[2670]: E0430 00:48:05.933928 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.934361 kubelet[2670]: W0430 00:48:05.934118 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.934361 kubelet[2670]: E0430 00:48:05.934141 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.934783 kubelet[2670]: E0430 00:48:05.934678 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.935129 kubelet[2670]: W0430 00:48:05.934965 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.935129 kubelet[2670]: E0430 00:48:05.934988 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.936534 kubelet[2670]: E0430 00:48:05.936132 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.936534 kubelet[2670]: W0430 00:48:05.936148 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.936534 kubelet[2670]: E0430 00:48:05.936287 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.938952 kubelet[2670]: E0430 00:48:05.938864 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.938952 kubelet[2670]: W0430 00:48:05.938883 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.938952 kubelet[2670]: E0430 00:48:05.938931 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.939198 kubelet[2670]: E0430 00:48:05.939146 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.939198 kubelet[2670]: W0430 00:48:05.939163 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.939198 kubelet[2670]: E0430 00:48:05.939174 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.939480 kubelet[2670]: E0430 00:48:05.939453 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.939480 kubelet[2670]: W0430 00:48:05.939465 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.939480 kubelet[2670]: E0430 00:48:05.939475 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.939689 kubelet[2670]: I0430 00:48:05.939505 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6de56a42-e7e2-4279-87ca-32df2fc92dd6-registration-dir\") pod \"csi-node-driver-wpvmp\" (UID: \"6de56a42-e7e2-4279-87ca-32df2fc92dd6\") " pod="calico-system/csi-node-driver-wpvmp" Apr 30 00:48:05.939689 kubelet[2670]: E0430 00:48:05.939682 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.939689 kubelet[2670]: W0430 00:48:05.939691 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.939765 kubelet[2670]: E0430 00:48:05.939706 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.939765 kubelet[2670]: I0430 00:48:05.939721 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6de56a42-e7e2-4279-87ca-32df2fc92dd6-socket-dir\") pod \"csi-node-driver-wpvmp\" (UID: \"6de56a42-e7e2-4279-87ca-32df2fc92dd6\") " pod="calico-system/csi-node-driver-wpvmp" Apr 30 00:48:05.940093 kubelet[2670]: E0430 00:48:05.939929 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.940093 kubelet[2670]: W0430 00:48:05.939943 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.940093 kubelet[2670]: E0430 00:48:05.939961 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.940093 kubelet[2670]: I0430 00:48:05.939980 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98gd\" (UniqueName: \"kubernetes.io/projected/6de56a42-e7e2-4279-87ca-32df2fc92dd6-kube-api-access-f98gd\") pod \"csi-node-driver-wpvmp\" (UID: \"6de56a42-e7e2-4279-87ca-32df2fc92dd6\") " pod="calico-system/csi-node-driver-wpvmp" Apr 30 00:48:05.940219 kubelet[2670]: E0430 00:48:05.940166 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.940219 kubelet[2670]: W0430 00:48:05.940176 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.940219 kubelet[2670]: E0430 00:48:05.940189 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.940219 kubelet[2670]: I0430 00:48:05.940208 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6de56a42-e7e2-4279-87ca-32df2fc92dd6-kubelet-dir\") pod \"csi-node-driver-wpvmp\" (UID: \"6de56a42-e7e2-4279-87ca-32df2fc92dd6\") " pod="calico-system/csi-node-driver-wpvmp" Apr 30 00:48:05.940647 kubelet[2670]: E0430 00:48:05.940390 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.940647 kubelet[2670]: W0430 00:48:05.940402 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.940647 kubelet[2670]: E0430 00:48:05.940419 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.940647 kubelet[2670]: I0430 00:48:05.940434 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6de56a42-e7e2-4279-87ca-32df2fc92dd6-varrun\") pod \"csi-node-driver-wpvmp\" (UID: \"6de56a42-e7e2-4279-87ca-32df2fc92dd6\") " pod="calico-system/csi-node-driver-wpvmp" Apr 30 00:48:05.940647 kubelet[2670]: E0430 00:48:05.940619 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.940647 kubelet[2670]: W0430 00:48:05.940628 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.940647 kubelet[2670]: E0430 00:48:05.940643 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.940857 kubelet[2670]: E0430 00:48:05.940791 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.940857 kubelet[2670]: W0430 00:48:05.940798 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.940932 kubelet[2670]: E0430 00:48:05.940872 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.941287 kubelet[2670]: E0430 00:48:05.941069 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.941287 kubelet[2670]: W0430 00:48:05.941087 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.942362 kubelet[2670]: E0430 00:48:05.942223 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.942362 kubelet[2670]: E0430 00:48:05.942326 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.942362 kubelet[2670]: W0430 00:48:05.942335 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.942493 kubelet[2670]: E0430 00:48:05.942422 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.942694 kubelet[2670]: E0430 00:48:05.942536 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.942694 kubelet[2670]: W0430 00:48:05.942551 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.942694 kubelet[2670]: E0430 00:48:05.942629 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.942821 kubelet[2670]: E0430 00:48:05.942714 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.942821 kubelet[2670]: W0430 00:48:05.942725 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.942821 kubelet[2670]: E0430 00:48:05.942793 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.943202 kubelet[2670]: E0430 00:48:05.942956 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.943202 kubelet[2670]: W0430 00:48:05.942971 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.943202 kubelet[2670]: E0430 00:48:05.942981 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.943202 kubelet[2670]: E0430 00:48:05.943157 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.943202 kubelet[2670]: W0430 00:48:05.943165 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.943202 kubelet[2670]: E0430 00:48:05.943173 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:05.943481 kubelet[2670]: E0430 00:48:05.943331 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.943481 kubelet[2670]: W0430 00:48:05.943338 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.943481 kubelet[2670]: E0430 00:48:05.943347 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.945181 kubelet[2670]: E0430 00:48:05.945131 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:05.945181 kubelet[2670]: W0430 00:48:05.945153 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:05.945181 kubelet[2670]: E0430 00:48:05.945169 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:05.988498 containerd[1476]: time="2025-04-30T00:48:05.988381467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74fd7b59c-8jwsm,Uid:f7028c7c-3d89-4bf3-bfc5-5421ece44e98,Namespace:calico-system,Attempt:0,} returns sandbox id \"128c1d0a10889ed583c87994a02ecc4080f651cd1aa5a8eda0fbf51f2a7ea6d1\"" Apr 30 00:48:05.991623 containerd[1476]: time="2025-04-30T00:48:05.990477805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 00:48:06.021233 containerd[1476]: time="2025-04-30T00:48:06.021198060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ml5f9,Uid:8c6ff056-8ee9-4a95-a610-5cb3519ad437,Namespace:calico-system,Attempt:0,}" Apr 30 00:48:06.042008 kubelet[2670]: E0430 00:48:06.041968 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.042008 kubelet[2670]: W0430 00:48:06.041996 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.042008 kubelet[2670]: E0430 00:48:06.042021 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.042886 kubelet[2670]: E0430 00:48:06.042659 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.042886 kubelet[2670]: W0430 00:48:06.042685 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.042886 kubelet[2670]: E0430 00:48:06.042802 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:06.043417 kubelet[2670]: E0430 00:48:06.043233 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.043417 kubelet[2670]: W0430 00:48:06.043253 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.043417 kubelet[2670]: E0430 00:48:06.043269 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.043969 kubelet[2670]: E0430 00:48:06.043594 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.043969 kubelet[2670]: W0430 00:48:06.043605 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.043969 kubelet[2670]: E0430 00:48:06.043783 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.044562 kubelet[2670]: E0430 00:48:06.044160 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.044562 kubelet[2670]: W0430 00:48:06.044179 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.044562 kubelet[2670]: E0430 00:48:06.044344 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.044562 kubelet[2670]: E0430 00:48:06.044506 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.044562 kubelet[2670]: W0430 00:48:06.044518 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.044562 kubelet[2670]: E0430 00:48:06.044547 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:06.046211 kubelet[2670]: E0430 00:48:06.044730 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.046211 kubelet[2670]: W0430 00:48:06.044739 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.046211 kubelet[2670]: E0430 00:48:06.044957 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.046211 kubelet[2670]: W0430 00:48:06.044976 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.046211 kubelet[2670]: E0430 00:48:06.045009 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.046211 kubelet[2670]: E0430 00:48:06.045045 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.046211 kubelet[2670]: E0430 00:48:06.045457 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.046211 kubelet[2670]: W0430 00:48:06.045469 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.046211 kubelet[2670]: E0430 00:48:06.045540 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.046211 kubelet[2670]: E0430 00:48:06.045766 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.046954 kubelet[2670]: W0430 00:48:06.045776 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.046954 kubelet[2670]: E0430 00:48:06.045903 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.046954 kubelet[2670]: E0430 00:48:06.046075 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.046954 kubelet[2670]: W0430 00:48:06.046085 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.046954 kubelet[2670]: E0430 00:48:06.046132 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:06.046954 kubelet[2670]: E0430 00:48:06.046507 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.046954 kubelet[2670]: W0430 00:48:06.046519 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.046954 kubelet[2670]: E0430 00:48:06.046584 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.047409 kubelet[2670]: E0430 00:48:06.047068 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.047409 kubelet[2670]: W0430 00:48:06.047080 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.047409 kubelet[2670]: E0430 00:48:06.047130 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.047773 kubelet[2670]: E0430 00:48:06.047637 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.047773 kubelet[2670]: W0430 00:48:06.047655 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.048537 kubelet[2670]: E0430 00:48:06.047938 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.048537 kubelet[2670]: W0430 00:48:06.047949 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.048537 kubelet[2670]: E0430 00:48:06.048171 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.048537 kubelet[2670]: W0430 00:48:06.048180 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.048537 kubelet[2670]: E0430 00:48:06.048426 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.048537 kubelet[2670]: W0430 00:48:06.048436 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.048681 kubelet[2670]: E0430 00:48:06.048641 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.048681 kubelet[2670]: W0430 00:48:06.048650 2670 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.048681 kubelet[2670]: E0430 00:48:06.048662 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.049359 kubelet[2670]: E0430 00:48:06.048964 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.049359 kubelet[2670]: W0430 00:48:06.048984 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.049359 kubelet[2670]: E0430 00:48:06.048988 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.049359 kubelet[2670]: E0430 00:48:06.049004 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.049359 kubelet[2670]: E0430 00:48:06.047690 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.049359 kubelet[2670]: E0430 00:48:06.048995 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.049359 kubelet[2670]: E0430 00:48:06.049259 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.049359 kubelet[2670]: W0430 00:48:06.049270 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.049359 kubelet[2670]: E0430 00:48:06.049280 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.050000 kubelet[2670]: E0430 00:48:06.049447 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.050000 kubelet[2670]: W0430 00:48:06.049469 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.050000 kubelet[2670]: E0430 00:48:06.049480 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.050629 kubelet[2670]: E0430 00:48:06.050066 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:06.050629 kubelet[2670]: E0430 00:48:06.050419 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.050629 kubelet[2670]: W0430 00:48:06.050432 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.050629 kubelet[2670]: E0430 00:48:06.050457 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.050629 kubelet[2670]: E0430 00:48:06.050644 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.050629 kubelet[2670]: W0430 00:48:06.050652 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.050869 kubelet[2670]: E0430 00:48:06.050669 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.051333 kubelet[2670]: E0430 00:48:06.051312 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.051398 kubelet[2670]: W0430 00:48:06.051348 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.051462 kubelet[2670]: E0430 00:48:06.051436 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.051644 kubelet[2670]: E0430 00:48:06.051630 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.051644 kubelet[2670]: W0430 00:48:06.051642 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.052225 kubelet[2670]: E0430 00:48:06.051655 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.066263 containerd[1476]: time="2025-04-30T00:48:06.063380289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:48:06.066263 containerd[1476]: time="2025-04-30T00:48:06.063631651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:48:06.066263 containerd[1476]: time="2025-04-30T00:48:06.063655292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:06.066263 containerd[1476]: time="2025-04-30T00:48:06.063775173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:06.070041 kubelet[2670]: E0430 00:48:06.069996 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:06.070041 kubelet[2670]: W0430 00:48:06.070038 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:06.072434 kubelet[2670]: E0430 00:48:06.070259 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:06.088160 systemd[1]: Started cri-containerd-c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b.scope - libcontainer container c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b. Apr 30 00:48:06.115721 containerd[1476]: time="2025-04-30T00:48:06.115651762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ml5f9,Uid:8c6ff056-8ee9-4a95-a610-5cb3519ad437,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b\"" Apr 30 00:48:07.121729 kubelet[2670]: E0430 00:48:07.121559 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:09.121788 kubelet[2670]: E0430 00:48:09.121258 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:11.123957 kubelet[2670]: E0430 00:48:11.121560 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:13.122386 kubelet[2670]: E0430 00:48:13.121559 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:15.130634 kubelet[2670]: E0430 00:48:15.129216 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:17.122145 kubelet[2670]: E0430 00:48:17.121234 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" 
podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:19.123142 kubelet[2670]: E0430 00:48:19.122857 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:21.121670 kubelet[2670]: E0430 00:48:21.121171 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:23.123330 kubelet[2670]: E0430 00:48:23.121548 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:25.122642 kubelet[2670]: E0430 00:48:25.121053 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:27.122205 kubelet[2670]: E0430 00:48:27.121554 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:29.123151 kubelet[2670]: E0430 00:48:29.121770 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:31.122982 kubelet[2670]: E0430 00:48:31.121057 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:33.121550 kubelet[2670]: E0430 00:48:33.121502 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:35.121458 kubelet[2670]: E0430 00:48:35.121384 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:37.127995 kubelet[2670]: E0430 00:48:37.123252 2670 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:39.126001 kubelet[2670]: E0430 00:48:39.125548 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:41.123029 kubelet[2670]: E0430 00:48:41.121249 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:43.123191 kubelet[2670]: E0430 00:48:43.121756 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:45.123360 kubelet[2670]: E0430 00:48:45.121252 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:47.122660 kubelet[2670]: E0430 00:48:47.121945 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:49.123178 kubelet[2670]: E0430 00:48:49.122438 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:51.124983 kubelet[2670]: E0430 00:48:51.122555 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:53.121991 kubelet[2670]: E0430 00:48:53.120606 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:53.419689 containerd[1476]: time="2025-04-30T00:48:53.419534449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:53.421505 containerd[1476]: time="2025-04-30T00:48:53.421284497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 00:48:53.422685 containerd[1476]: time="2025-04-30T00:48:53.422378663Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:53.425231 containerd[1476]: time="2025-04-30T00:48:53.425165516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:53.426490 containerd[1476]: time="2025-04-30T00:48:53.426076520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 47.435554355s" Apr 30 00:48:53.426490 containerd[1476]: time="2025-04-30T00:48:53.426116440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 00:48:53.427995 containerd[1476]: time="2025-04-30T00:48:53.427959449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:48:53.446160 containerd[1476]: time="2025-04-30T00:48:53.446056416Z" level=info msg="CreateContainer within sandbox \"128c1d0a10889ed583c87994a02ecc4080f651cd1aa5a8eda0fbf51f2a7ea6d1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:48:53.470872 containerd[1476]: time="2025-04-30T00:48:53.470753494Z" level=info msg="CreateContainer within sandbox \"128c1d0a10889ed583c87994a02ecc4080f651cd1aa5a8eda0fbf51f2a7ea6d1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"86f7a5cc05f9cf350efbf378dffa14391a8cee0c305fe581e2d734a43d5ab331\"" Apr 30 00:48:53.472811 containerd[1476]: time="2025-04-30T00:48:53.472723344Z" level=info msg="StartContainer for \"86f7a5cc05f9cf350efbf378dffa14391a8cee0c305fe581e2d734a43d5ab331\"" Apr 30 00:48:53.511143 systemd[1]: Started cri-containerd-86f7a5cc05f9cf350efbf378dffa14391a8cee0c305fe581e2d734a43d5ab331.scope - libcontainer container 86f7a5cc05f9cf350efbf378dffa14391a8cee0c305fe581e2d734a43d5ab331. 
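The driver-call.go and plugins.go errors that recur throughout this stretch of the journal come from kubelet's FlexVolume prober. On every probe pass, kubelet scans the plugin directory, executes each vendor~driver binary with the argument init, and unmarshals whatever the process writes to stdout as JSON. Here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds cannot be executed ("executable file not found in $PATH"), so the output is empty and the unmarshal fails with "unexpected end of JSON input" on every pass. A minimal driver stub that would satisfy the init handshake is sketched below in Go; this is an illustration of the output shape kubelet expects, not the actual nodeagent~uds driver, which belongs to whatever workload created that directory.

// uds.go - minimal FlexVolume driver stub illustrating the call protocol.
// kubelet invokes the driver as: <driver> <operation> [args...],
// e.g. "uds init" during plugin probing, and parses stdout as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object kubelet's driver-call.go expects:
// a "status" of Success / Failure / Not supported, an optional message,
// and (for "init") optional capability flags.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: uds <operation> [args...]")
		os.Exit(1)
	}
	enc := json.NewEncoder(os.Stdout)
	switch os.Args[1] {
	case "init":
		// Answer the probe; attach=false tells kubelet to skip the
		// attach/detach family of calls for volumes using this driver.
		enc.Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	default:
		// "Not supported" makes kubelet fall back to its default
		// handling for operations the driver does not implement.
		enc.Encode(driverStatus{
			Status:  "Not supported",
			Message: "operation not implemented: " + os.Args[1],
		})
	}
}

With a response like {"status":"Success","capabilities":{"attach":false}} on stdout, kubelet has valid JSON to unmarshal and the probe errors stop. If the nodeagent~uds directory is simply stale, deleting it silences the prober just as well.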
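The pod-sync failures for csi-node-driver-wpvmp ("cni plugin not initialized") repeat every two seconds from 00:48:05 to 00:48:53, which is exactly the window in which the Calico images were still being pulled: containerd reports the typha pull completing just above, after 47.435554355s. The pod_startup_latency_tracker record below makes the same cost visible, and its figures decompose cleanly from the timestamps it carries, consistent with the tracker excluding image-pull time from the SLO figure:

    podStartE2EDuration = observedRunningTime - podCreationTimestamp
                        = 00:48:54.3472708 - 00:48:05 = 49.3472708 s
    image-pull window   = lastFinishedPulling - firstStartedPulling
                        = 00:48:53.427159845 - 00:48:05.990169162 = 47.436990683 s
    podStartSLOduration = 49.3472708 s - 47.436990683 s = 1.910280117 s

In other words, of the roughly 49 seconds from pod creation to observed running, all but about 1.9 seconds was spent pulling the calico/typha image.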
Apr 30 00:48:53.554368 containerd[1476]: time="2025-04-30T00:48:53.554287334Z" level=info msg="StartContainer for \"86f7a5cc05f9cf350efbf378dffa14391a8cee0c305fe581e2d734a43d5ab331\" returns successfully"
Apr 30 00:48:54.347680 kubelet[2670]: I0430 00:48:54.347316 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74fd7b59c-8jwsm" podStartSLOduration=1.910280117 podStartE2EDuration="49.3472708s" podCreationTimestamp="2025-04-30 00:48:05 +0000 UTC" firstStartedPulling="2025-04-30 00:48:05.990169162 +0000 UTC m=+16.990885891" lastFinishedPulling="2025-04-30 00:48:53.427159845 +0000 UTC m=+64.427876574" observedRunningTime="2025-04-30 00:48:54.34518883 +0000 UTC m=+65.345905599" watchObservedRunningTime="2025-04-30 00:48:54.3472708 +0000 UTC m=+65.347987529"
Apr 30 00:48:54.383389 kubelet[2670]: E0430 00:48:54.383344 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:54.383691 kubelet[2670]: W0430 00:48:54.383633 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:54.383691 kubelet[2670]: E0430 00:48:54.383668 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the preceding three kubelet records repeat, with fresh timestamps, 32 more times between 00:48:54.385 and 00:48:54.419; the duplicates are elided ...]
Apr 30 00:48:54.959453 containerd[1476]: time="2025-04-30T00:48:54.959372671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:48:54.960716 containerd[1476]: time="2025-04-30T00:48:54.960645437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903"
Apr 30 00:48:54.961754 containerd[1476]: time="2025-04-30T00:48:54.961685762Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:48:54.965296 containerd[1476]: time="2025-04-30T00:48:54.964217254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:48:54.965296 containerd[1476]: time="2025-04-30T00:48:54.964920857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.536924168s"
Apr 30 00:48:54.965296 containerd[1476]: time="2025-04-30T00:48:54.964965657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\""
Apr 30 00:48:54.968644 containerd[1476]: time="2025-04-30T00:48:54.968606355Z" level=info msg="CreateContainer within sandbox \"c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 30 00:48:54.981308 containerd[1476]: time="2025-04-30T00:48:54.981264135Z" level=info msg="CreateContainer within sandbox \"c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383\""
Apr 30 00:48:54.985053 containerd[1476]: time="2025-04-30T00:48:54.982662022Z" level=info msg="StartContainer for \"3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383\""
Apr 30 00:48:55.019108 systemd[1]: Started cri-containerd-3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383.scope - libcontainer container 3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383.
Apr 30 00:48:55.048406 containerd[1476]: time="2025-04-30T00:48:55.048271452Z" level=info msg="StartContainer for \"3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383\" returns successfully"
Apr 30 00:48:55.066330 systemd[1]: cri-containerd-3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383.scope: Deactivated successfully.
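The burst of kubelet records above is the dynamic FlexVolume prober at work: kubelet execs every driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument "init" and expects a JSON status object on stdout. The nodeagent~uds/uds binary does not exist yet (in Calico's setup it is installed by the flexvol-driver container whose image is pulled just above), so the exec fails, stdout is empty, and decoding an empty string produces the logged "unexpected end of JSON input". A minimal Go sketch of both sides of that exchange, with driverStatus as an illustrative shape rather than kubelet's exact internal type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus approximates the reply a FlexVolume driver prints;
    // the exact fields here are an illustrative assumption.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The probe ran "uds init" but the binary was missing, so stdout was "".
        var st driverStatus
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // "unexpected end of JSON input", as in the log

        // What a healthy driver would print in response to "init"
        // when it does not support attach/detach:
        ok := driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        out, _ := json.Marshal(ok)
        fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
    }

Once the flexvol-driver container drops the uds binary into that directory, the same probe returns valid JSON and the error storm stops.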
Apr 30 00:48:55.121694 kubelet[2670]: E0430 00:48:55.121463 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:55.155530 containerd[1476]: time="2025-04-30T00:48:55.155412999Z" level=info msg="shim disconnected" id=3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383 namespace=k8s.io Apr 30 00:48:55.156154 containerd[1476]: time="2025-04-30T00:48:55.155833801Z" level=warning msg="cleaning up after shim disconnected" id=3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383 namespace=k8s.io Apr 30 00:48:55.156154 containerd[1476]: time="2025-04-30T00:48:55.155866401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:48:55.340561 containerd[1476]: time="2025-04-30T00:48:55.340380753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:48:55.434311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d1fd63675458b70a1a295951ad40f68f94cf24b48ed7e5572844b8ca95ee383-rootfs.mount: Deactivated successfully. Apr 30 00:48:57.122608 kubelet[2670]: E0430 00:48:57.122299 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:59.121787 kubelet[2670]: E0430 00:48:59.121426 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:48:59.699544 containerd[1476]: time="2025-04-30T00:48:59.699449698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:59.701121 containerd[1476]: time="2025-04-30T00:48:59.701057345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:48:59.702485 containerd[1476]: time="2025-04-30T00:48:59.702405952Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:59.705391 containerd[1476]: time="2025-04-30T00:48:59.705344285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:59.706506 containerd[1476]: time="2025-04-30T00:48:59.706260129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.365838456s" Apr 30 00:48:59.706506 containerd[1476]: time="2025-04-30T00:48:59.706302850Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:48:59.719878 containerd[1476]: time="2025-04-30T00:48:59.719825952Z" level=info msg="CreateContainer within sandbox \"c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:48:59.746072 containerd[1476]: time="2025-04-30T00:48:59.745841992Z" level=info msg="CreateContainer within sandbox \"c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b\"" Apr 30 00:48:59.748145 containerd[1476]: time="2025-04-30T00:48:59.747985402Z" level=info msg="StartContainer for \"c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b\"" Apr 30 00:48:59.778643 systemd[1]: run-containerd-runc-k8s.io-c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b-runc.9OsAeA.mount: Deactivated successfully. Apr 30 00:48:59.785102 systemd[1]: Started cri-containerd-c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b.scope - libcontainer container c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b. Apr 30 00:48:59.815414 containerd[1476]: time="2025-04-30T00:48:59.815360353Z" level=info msg="StartContainer for \"c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b\" returns successfully" Apr 30 00:49:00.302413 containerd[1476]: time="2025-04-30T00:49:00.302360951Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:49:00.305403 systemd[1]: cri-containerd-c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b.scope: Deactivated successfully. Apr 30 00:49:00.390684 kubelet[2670]: I0430 00:49:00.380452 2670 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 00:49:00.452460 containerd[1476]: time="2025-04-30T00:49:00.451883117Z" level=info msg="shim disconnected" id=c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b namespace=k8s.io Apr 30 00:49:00.452460 containerd[1476]: time="2025-04-30T00:49:00.452021917Z" level=warning msg="cleaning up after shim disconnected" id=c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b namespace=k8s.io Apr 30 00:49:00.452460 containerd[1476]: time="2025-04-30T00:49:00.452031317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:49:00.453669 systemd[1]: Created slice kubepods-burstable-pod3a56768d_4969_42c8_b398_626188e5b2d7.slice - libcontainer container kubepods-burstable-pod3a56768d_4969_42c8_b398_626188e5b2d7.slice. 
Apr 30 00:49:00.461139 kubelet[2670]: I0430 00:49:00.461109 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a56768d-4969-42c8-b398-626188e5b2d7-config-volume\") pod \"coredns-6f6b679f8f-hj5vb\" (UID: \"3a56768d-4969-42c8-b398-626188e5b2d7\") " pod="kube-system/coredns-6f6b679f8f-hj5vb" Apr 30 00:49:00.461139 kubelet[2670]: I0430 00:49:00.461184 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rgnc\" (UniqueName: \"kubernetes.io/projected/3a56768d-4969-42c8-b398-626188e5b2d7-kube-api-access-4rgnc\") pod \"coredns-6f6b679f8f-hj5vb\" (UID: \"3a56768d-4969-42c8-b398-626188e5b2d7\") " pod="kube-system/coredns-6f6b679f8f-hj5vb" Apr 30 00:49:00.479054 systemd[1]: Created slice kubepods-besteffort-pod4e7144e4_2906_40ae_a883_218aba9bbba0.slice - libcontainer container kubepods-besteffort-pod4e7144e4_2906_40ae_a883_218aba9bbba0.slice. Apr 30 00:49:00.486834 systemd[1]: Created slice kubepods-burstable-pod16b9cf8b_b97b_4135_b965_002368ca22b1.slice - libcontainer container kubepods-burstable-pod16b9cf8b_b97b_4135_b965_002368ca22b1.slice. Apr 30 00:49:00.505521 systemd[1]: Created slice kubepods-besteffort-podd29a6595_5453_4daa_b826_4b1dc5b8c3af.slice - libcontainer container kubepods-besteffort-podd29a6595_5453_4daa_b826_4b1dc5b8c3af.slice. Apr 30 00:49:00.514510 systemd[1]: Created slice kubepods-besteffort-pod779394cb_9d3f_45fc_87b2_ac2caf222c9a.slice - libcontainer container kubepods-besteffort-pod779394cb_9d3f_45fc_87b2_ac2caf222c9a.slice. Apr 30 00:49:00.563260 kubelet[2670]: I0430 00:49:00.561850 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e7144e4-2906-40ae-a883-218aba9bbba0-calico-apiserver-certs\") pod \"calico-apiserver-7766c54769-sk6mv\" (UID: \"4e7144e4-2906-40ae-a883-218aba9bbba0\") " pod="calico-apiserver/calico-apiserver-7766c54769-sk6mv" Apr 30 00:49:00.563260 kubelet[2670]: I0430 00:49:00.561923 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d29a6595-5453-4daa-b826-4b1dc5b8c3af-tigera-ca-bundle\") pod \"calico-kube-controllers-574958cb4d-svpdv\" (UID: \"d29a6595-5453-4daa-b826-4b1dc5b8c3af\") " pod="calico-system/calico-kube-controllers-574958cb4d-svpdv" Apr 30 00:49:00.563260 kubelet[2670]: I0430 00:49:00.561965 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mn8\" (UniqueName: \"kubernetes.io/projected/d29a6595-5453-4daa-b826-4b1dc5b8c3af-kube-api-access-m2mn8\") pod \"calico-kube-controllers-574958cb4d-svpdv\" (UID: \"d29a6595-5453-4daa-b826-4b1dc5b8c3af\") " pod="calico-system/calico-kube-controllers-574958cb4d-svpdv" Apr 30 00:49:00.563260 kubelet[2670]: I0430 00:49:00.561987 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16b9cf8b-b97b-4135-b965-002368ca22b1-config-volume\") pod \"coredns-6f6b679f8f-58vbz\" (UID: \"16b9cf8b-b97b-4135-b965-002368ca22b1\") " pod="kube-system/coredns-6f6b679f8f-58vbz" Apr 30 00:49:00.563260 kubelet[2670]: I0430 00:49:00.562011 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pskb\" (UniqueName: 
\"kubernetes.io/projected/16b9cf8b-b97b-4135-b965-002368ca22b1-kube-api-access-8pskb\") pod \"coredns-6f6b679f8f-58vbz\" (UID: \"16b9cf8b-b97b-4135-b965-002368ca22b1\") " pod="kube-system/coredns-6f6b679f8f-58vbz" Apr 30 00:49:00.563507 kubelet[2670]: I0430 00:49:00.562036 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzhw\" (UniqueName: \"kubernetes.io/projected/4e7144e4-2906-40ae-a883-218aba9bbba0-kube-api-access-kqzhw\") pod \"calico-apiserver-7766c54769-sk6mv\" (UID: \"4e7144e4-2906-40ae-a883-218aba9bbba0\") " pod="calico-apiserver/calico-apiserver-7766c54769-sk6mv" Apr 30 00:49:00.563507 kubelet[2670]: I0430 00:49:00.562072 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khx47\" (UniqueName: \"kubernetes.io/projected/779394cb-9d3f-45fc-87b2-ac2caf222c9a-kube-api-access-khx47\") pod \"calico-apiserver-7766c54769-29prm\" (UID: \"779394cb-9d3f-45fc-87b2-ac2caf222c9a\") " pod="calico-apiserver/calico-apiserver-7766c54769-29prm" Apr 30 00:49:00.563507 kubelet[2670]: I0430 00:49:00.562090 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/779394cb-9d3f-45fc-87b2-ac2caf222c9a-calico-apiserver-certs\") pod \"calico-apiserver-7766c54769-29prm\" (UID: \"779394cb-9d3f-45fc-87b2-ac2caf222c9a\") " pod="calico-apiserver/calico-apiserver-7766c54769-29prm" Apr 30 00:49:00.740877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c84635dccb7d76f1d340b2d704545fc42da95f92dd5c1ea3c841bf8e92aef39b-rootfs.mount: Deactivated successfully. Apr 30 00:49:00.771041 containerd[1476]: time="2025-04-30T00:49:00.770853019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hj5vb,Uid:3a56768d-4969-42c8-b398-626188e5b2d7,Namespace:kube-system,Attempt:0,}" Apr 30 00:49:00.786104 containerd[1476]: time="2025-04-30T00:49:00.785631327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-sk6mv,Uid:4e7144e4-2906-40ae-a883-218aba9bbba0,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:49:00.806532 containerd[1476]: time="2025-04-30T00:49:00.806423863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-58vbz,Uid:16b9cf8b-b97b-4135-b965-002368ca22b1,Namespace:kube-system,Attempt:0,}" Apr 30 00:49:00.817374 containerd[1476]: time="2025-04-30T00:49:00.817135592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574958cb4d-svpdv,Uid:d29a6595-5453-4daa-b826-4b1dc5b8c3af,Namespace:calico-system,Attempt:0,}" Apr 30 00:49:00.821741 containerd[1476]: time="2025-04-30T00:49:00.821474172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-29prm,Uid:779394cb-9d3f-45fc-87b2-ac2caf222c9a,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:49:00.978199 containerd[1476]: time="2025-04-30T00:49:00.978147530Z" level=error msg="Failed to destroy network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.979062 containerd[1476]: time="2025-04-30T00:49:00.978813733Z" level=error msg="encountered an error cleaning up failed sandbox 
\"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.979353 containerd[1476]: time="2025-04-30T00:49:00.979239415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-sk6mv,Uid:4e7144e4-2906-40ae-a883-218aba9bbba0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.979866 kubelet[2670]: E0430 00:49:00.979721 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.980134 kubelet[2670]: E0430 00:49:00.979884 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766c54769-sk6mv" Apr 30 00:49:00.980134 kubelet[2670]: E0430 00:49:00.980038 2670 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766c54769-sk6mv" Apr 30 00:49:00.980754 kubelet[2670]: E0430 00:49:00.980190 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7766c54769-sk6mv_calico-apiserver(4e7144e4-2906-40ae-a883-218aba9bbba0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7766c54769-sk6mv_calico-apiserver(4e7144e4-2906-40ae-a883-218aba9bbba0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766c54769-sk6mv" podUID="4e7144e4-2906-40ae-a883-218aba9bbba0" Apr 30 00:49:00.984823 containerd[1476]: time="2025-04-30T00:49:00.984772801Z" level=error msg="Failed to destroy network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
00:49:00.985239 containerd[1476]: time="2025-04-30T00:49:00.985194682Z" level=error msg="encountered an error cleaning up failed sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.985308 containerd[1476]: time="2025-04-30T00:49:00.985267563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574958cb4d-svpdv,Uid:d29a6595-5453-4daa-b826-4b1dc5b8c3af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.987187 kubelet[2670]: E0430 00:49:00.987143 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.987305 kubelet[2670]: E0430 00:49:00.987204 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-574958cb4d-svpdv" Apr 30 00:49:00.987305 kubelet[2670]: E0430 00:49:00.987224 2670 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-574958cb4d-svpdv" Apr 30 00:49:00.987305 kubelet[2670]: E0430 00:49:00.987276 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-574958cb4d-svpdv_calico-system(d29a6595-5453-4daa-b826-4b1dc5b8c3af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-574958cb4d-svpdv_calico-system(d29a6595-5453-4daa-b826-4b1dc5b8c3af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-574958cb4d-svpdv" podUID="d29a6595-5453-4daa-b826-4b1dc5b8c3af" Apr 30 00:49:01.002319 containerd[1476]: time="2025-04-30T00:49:01.002178600Z" level=error msg="Failed to destroy network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.003064 containerd[1476]: time="2025-04-30T00:49:01.002717043Z" level=error msg="encountered an error cleaning up failed sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.003064 containerd[1476]: time="2025-04-30T00:49:01.002794923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hj5vb,Uid:3a56768d-4969-42c8-b398-626188e5b2d7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.003212 kubelet[2670]: E0430 00:49:01.003180 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.003445 kubelet[2670]: E0430 00:49:01.003249 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hj5vb" Apr 30 00:49:01.003445 kubelet[2670]: E0430 00:49:01.003280 2670 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hj5vb" Apr 30 00:49:01.003445 kubelet[2670]: E0430 00:49:01.003326 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hj5vb_kube-system(3a56768d-4969-42c8-b398-626188e5b2d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hj5vb_kube-system(3a56768d-4969-42c8-b398-626188e5b2d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hj5vb" podUID="3a56768d-4969-42c8-b398-626188e5b2d7" Apr 30 00:49:01.013884 containerd[1476]: time="2025-04-30T00:49:01.013585252Z" level=error msg="Failed to destroy network for sandbox 
\"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.014818 containerd[1476]: time="2025-04-30T00:49:01.014696737Z" level=error msg="encountered an error cleaning up failed sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.014818 containerd[1476]: time="2025-04-30T00:49:01.014776938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-29prm,Uid:779394cb-9d3f-45fc-87b2-ac2caf222c9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.015379 kubelet[2670]: E0430 00:49:01.015345 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.015625 kubelet[2670]: E0430 00:49:01.015531 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766c54769-29prm" Apr 30 00:49:01.015625 kubelet[2670]: E0430 00:49:01.015571 2670 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766c54769-29prm" Apr 30 00:49:01.015788 kubelet[2670]: E0430 00:49:01.015739 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7766c54769-29prm_calico-apiserver(779394cb-9d3f-45fc-87b2-ac2caf222c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7766c54769-29prm_calico-apiserver(779394cb-9d3f-45fc-87b2-ac2caf222c9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766c54769-29prm" podUID="779394cb-9d3f-45fc-87b2-ac2caf222c9a" Apr 
30 00:49:01.030366 containerd[1476]: time="2025-04-30T00:49:01.030214368Z" level=error msg="Failed to destroy network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.030908 containerd[1476]: time="2025-04-30T00:49:01.030792131Z" level=error msg="encountered an error cleaning up failed sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.030908 containerd[1476]: time="2025-04-30T00:49:01.030873011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-58vbz,Uid:16b9cf8b-b97b-4135-b965-002368ca22b1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.031755 kubelet[2670]: E0430 00:49:01.031195 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.031755 kubelet[2670]: E0430 00:49:01.031254 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-58vbz" Apr 30 00:49:01.031755 kubelet[2670]: E0430 00:49:01.031276 2670 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-58vbz" Apr 30 00:49:01.031873 kubelet[2670]: E0430 00:49:01.031328 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-58vbz_kube-system(16b9cf8b-b97b-4135-b965-002368ca22b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-58vbz_kube-system(16b9cf8b-b97b-4135-b965-002368ca22b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-58vbz" 
podUID="16b9cf8b-b97b-4135-b965-002368ca22b1" Apr 30 00:49:01.131875 systemd[1]: Created slice kubepods-besteffort-pod6de56a42_e7e2_4279_87ca_32df2fc92dd6.slice - libcontainer container kubepods-besteffort-pod6de56a42_e7e2_4279_87ca_32df2fc92dd6.slice. Apr 30 00:49:01.135858 containerd[1476]: time="2025-04-30T00:49:01.135814130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpvmp,Uid:6de56a42-e7e2-4279-87ca-32df2fc92dd6,Namespace:calico-system,Attempt:0,}" Apr 30 00:49:01.201718 containerd[1476]: time="2025-04-30T00:49:01.201658230Z" level=error msg="Failed to destroy network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.202122 containerd[1476]: time="2025-04-30T00:49:01.202070072Z" level=error msg="encountered an error cleaning up failed sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.202184 containerd[1476]: time="2025-04-30T00:49:01.202159792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpvmp,Uid:6de56a42-e7e2-4279-87ca-32df2fc92dd6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.202530 kubelet[2670]: E0430 00:49:01.202394 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.202530 kubelet[2670]: E0430 00:49:01.202459 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpvmp" Apr 30 00:49:01.202530 kubelet[2670]: E0430 00:49:01.202479 2670 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpvmp" Apr 30 00:49:01.202974 kubelet[2670]: E0430 00:49:01.202844 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpvmp_calico-system(6de56a42-e7e2-4279-87ca-32df2fc92dd6)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"csi-node-driver-wpvmp_calico-system(6de56a42-e7e2-4279-87ca-32df2fc92dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:49:01.353695 kubelet[2670]: I0430 00:49:01.353659 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:01.356728 containerd[1476]: time="2025-04-30T00:49:01.355396451Z" level=info msg="StopPodSandbox for \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\"" Apr 30 00:49:01.356728 containerd[1476]: time="2025-04-30T00:49:01.355585332Z" level=info msg="Ensure that sandbox c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451 in task-service has been cleanup successfully" Apr 30 00:49:01.358361 kubelet[2670]: I0430 00:49:01.358033 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:01.359801 containerd[1476]: time="2025-04-30T00:49:01.359739671Z" level=info msg="StopPodSandbox for \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\"" Apr 30 00:49:01.360657 containerd[1476]: time="2025-04-30T00:49:01.360006192Z" level=info msg="Ensure that sandbox ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766 in task-service has been cleanup successfully" Apr 30 00:49:01.363819 kubelet[2670]: I0430 00:49:01.363107 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:01.365083 containerd[1476]: time="2025-04-30T00:49:01.365038175Z" level=info msg="StopPodSandbox for \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\"" Apr 30 00:49:01.365309 containerd[1476]: time="2025-04-30T00:49:01.365243656Z" level=info msg="Ensure that sandbox ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3 in task-service has been cleanup successfully" Apr 30 00:49:01.368989 kubelet[2670]: I0430 00:49:01.368033 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:01.369114 containerd[1476]: time="2025-04-30T00:49:01.368744072Z" level=info msg="StopPodSandbox for \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\"" Apr 30 00:49:01.371006 containerd[1476]: time="2025-04-30T00:49:01.369423915Z" level=info msg="Ensure that sandbox 38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e in task-service has been cleanup successfully" Apr 30 00:49:01.377753 kubelet[2670]: I0430 00:49:01.376554 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:01.379154 containerd[1476]: time="2025-04-30T00:49:01.378578037Z" level=info msg="StopPodSandbox for \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\"" Apr 30 00:49:01.379989 containerd[1476]: time="2025-04-30T00:49:01.379943923Z" level=info msg="Ensure that sandbox 
f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1 in task-service has been cleanup successfully" Apr 30 00:49:01.428232 containerd[1476]: time="2025-04-30T00:49:01.428087743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:49:01.431361 kubelet[2670]: I0430 00:49:01.430643 2670 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:01.439407 containerd[1476]: time="2025-04-30T00:49:01.439365154Z" level=info msg="StopPodSandbox for \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\"" Apr 30 00:49:01.440343 containerd[1476]: time="2025-04-30T00:49:01.440229478Z" level=info msg="Ensure that sandbox 411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119 in task-service has been cleanup successfully" Apr 30 00:49:01.478198 containerd[1476]: time="2025-04-30T00:49:01.478148091Z" level=error msg="StopPodSandbox for \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\" failed" error="failed to destroy network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.478810 kubelet[2670]: E0430 00:49:01.478633 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:01.478810 kubelet[2670]: E0430 00:49:01.478696 2670 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3"} Apr 30 00:49:01.478810 kubelet[2670]: E0430 00:49:01.478756 2670 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d29a6595-5453-4daa-b826-4b1dc5b8c3af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.478810 kubelet[2670]: E0430 00:49:01.478778 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d29a6595-5453-4daa-b826-4b1dc5b8c3af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-574958cb4d-svpdv" podUID="d29a6595-5453-4daa-b826-4b1dc5b8c3af" Apr 30 00:49:01.495086 containerd[1476]: time="2025-04-30T00:49:01.495034528Z" level=error msg="StopPodSandbox for \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\" failed" error="failed 
to destroy network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.496354 kubelet[2670]: E0430 00:49:01.496171 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:01.496354 kubelet[2670]: E0430 00:49:01.496244 2670 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451"} Apr 30 00:49:01.496354 kubelet[2670]: E0430 00:49:01.496279 2670 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a56768d-4969-42c8-b398-626188e5b2d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.496354 kubelet[2670]: E0430 00:49:01.496319 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a56768d-4969-42c8-b398-626188e5b2d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hj5vb" podUID="3a56768d-4969-42c8-b398-626188e5b2d7" Apr 30 00:49:01.503044 containerd[1476]: time="2025-04-30T00:49:01.502993844Z" level=error msg="StopPodSandbox for \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\" failed" error="failed to destroy network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.503582 kubelet[2670]: E0430 00:49:01.503262 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:01.503582 kubelet[2670]: E0430 00:49:01.503349 2670 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766"} Apr 30 00:49:01.503582 kubelet[2670]: E0430 00:49:01.503399 2670 
kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"779394cb-9d3f-45fc-87b2-ac2caf222c9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.503582 kubelet[2670]: E0430 00:49:01.503424 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"779394cb-9d3f-45fc-87b2-ac2caf222c9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766c54769-29prm" podUID="779394cb-9d3f-45fc-87b2-ac2caf222c9a" Apr 30 00:49:01.509954 containerd[1476]: time="2025-04-30T00:49:01.509817036Z" level=error msg="StopPodSandbox for \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\" failed" error="failed to destroy network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.510517 kubelet[2670]: E0430 00:49:01.510348 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:01.510517 kubelet[2670]: E0430 00:49:01.510409 2670 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e"} Apr 30 00:49:01.510517 kubelet[2670]: E0430 00:49:01.510442 2670 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16b9cf8b-b97b-4135-b965-002368ca22b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.510517 kubelet[2670]: E0430 00:49:01.510479 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16b9cf8b-b97b-4135-b965-002368ca22b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-58vbz" 
podUID="16b9cf8b-b97b-4135-b965-002368ca22b1" Apr 30 00:49:01.513322 containerd[1476]: time="2025-04-30T00:49:01.512167166Z" level=error msg="StopPodSandbox for \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\" failed" error="failed to destroy network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.513445 kubelet[2670]: E0430 00:49:01.513073 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:01.513445 kubelet[2670]: E0430 00:49:01.513122 2670 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1"} Apr 30 00:49:01.513445 kubelet[2670]: E0430 00:49:01.513161 2670 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6de56a42-e7e2-4279-87ca-32df2fc92dd6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.513445 kubelet[2670]: E0430 00:49:01.513262 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6de56a42-e7e2-4279-87ca-32df2fc92dd6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpvmp" podUID="6de56a42-e7e2-4279-87ca-32df2fc92dd6" Apr 30 00:49:01.525984 containerd[1476]: time="2025-04-30T00:49:01.525907549Z" level=error msg="StopPodSandbox for \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\" failed" error="failed to destroy network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.526441 kubelet[2670]: E0430 00:49:01.526379 2670 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:01.526596 kubelet[2670]: E0430 
00:49:01.526572 2670 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119"} Apr 30 00:49:01.526711 kubelet[2670]: E0430 00:49:01.526694 2670 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e7144e4-2906-40ae-a883-218aba9bbba0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.526857 kubelet[2670]: E0430 00:49:01.526833 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e7144e4-2906-40ae-a883-218aba9bbba0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766c54769-sk6mv" podUID="4e7144e4-2906-40ae-a883-218aba9bbba0" Apr 30 00:49:01.733365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3-shm.mount: Deactivated successfully. Apr 30 00:49:01.733715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119-shm.mount: Deactivated successfully. Apr 30 00:49:01.733780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451-shm.mount: Deactivated successfully. Apr 30 00:49:07.638739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627290399.mount: Deactivated successfully. 
Apr 30 00:49:07.677380 containerd[1476]: time="2025-04-30T00:49:07.677310840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:07.678455 containerd[1476]: time="2025-04-30T00:49:07.678324565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:49:07.679615 containerd[1476]: time="2025-04-30T00:49:07.679312809Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:07.681815 containerd[1476]: time="2025-04-30T00:49:07.681761940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:07.682917 containerd[1476]: time="2025-04-30T00:49:07.682865545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.254449681s" Apr 30 00:49:07.683433 containerd[1476]: time="2025-04-30T00:49:07.682927545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:49:07.699643 containerd[1476]: time="2025-04-30T00:49:07.699603379Z" level=info msg="CreateContainer within sandbox \"c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:49:07.721382 containerd[1476]: time="2025-04-30T00:49:07.721232674Z" level=info msg="CreateContainer within sandbox \"c1ad78fe30ce1a2baf6c53f82da11b6e2e0c20947bcdca03db4f914fd8a5332b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3\"" Apr 30 00:49:07.722955 containerd[1476]: time="2025-04-30T00:49:07.722002918Z" level=info msg="StartContainer for \"01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3\"" Apr 30 00:49:07.765117 systemd[1]: Started cri-containerd-01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3.scope - libcontainer container 01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3. Apr 30 00:49:07.803145 containerd[1476]: time="2025-04-30T00:49:07.801872471Z" level=info msg="StartContainer for \"01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3\" returns successfully" Apr 30 00:49:07.934724 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:49:07.934889 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Apr 30 00:49:08.481260 kubelet[2670]: I0430 00:49:08.481175 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ml5f9" podStartSLOduration=1.914470413 podStartE2EDuration="1m3.481154547s" podCreationTimestamp="2025-04-30 00:48:05 +0000 UTC" firstStartedPulling="2025-04-30 00:48:06.117546657 +0000 UTC m=+17.118263386" lastFinishedPulling="2025-04-30 00:49:07.684230831 +0000 UTC m=+78.684947520" observedRunningTime="2025-04-30 00:49:08.479100258 +0000 UTC m=+79.479817067" watchObservedRunningTime="2025-04-30 00:49:08.481154547 +0000 UTC m=+79.481871276" Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.853013 1463 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.853072 1463 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.853370 1463 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.853854 1463 omaha_request_params.cc:62] Current group set to lts Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.854035 1463 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.854050 1463 update_attempter.cc:643] Scheduling an action processor start. Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.854070 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.854111 1463 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.854183 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.854194 1463 omaha_request_action.cc:272] Request: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: Apr 30 00:49:08.854713 update_engine[1463]: I20250430 00:49:08.854202 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:49:08.855734 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 00:49:08.856925 update_engine[1463]: I20250430 00:49:08.856837 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:49:08.857362 update_engine[1463]: I20250430 00:49:08.857317 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 00:49:08.858712 update_engine[1463]: E20250430 00:49:08.858649 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:49:08.858792 update_engine[1463]: I20250430 00:49:08.858740 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 00:49:09.486669 systemd[1]: run-containerd-runc-k8s.io-01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3-runc.rbVye6.mount: Deactivated successfully. 
Apr 30 00:49:09.831930 kernel: bpftool[3996]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:49:10.046395 systemd-networkd[1367]: vxlan.calico: Link UP Apr 30 00:49:10.046410 systemd-networkd[1367]: vxlan.calico: Gained carrier Apr 30 00:49:11.988429 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Apr 30 00:49:13.123541 containerd[1476]: time="2025-04-30T00:49:13.122449749Z" level=info msg="StopPodSandbox for \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\"" Apr 30 00:49:13.125958 containerd[1476]: time="2025-04-30T00:49:13.125243601Z" level=info msg="StopPodSandbox for \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\"" Apr 30 00:49:13.127805 containerd[1476]: time="2025-04-30T00:49:13.127679531Z" level=info msg="StopPodSandbox for \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\"" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.251 [INFO][4107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.251 [INFO][4107] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" iface="eth0" netns="/var/run/netns/cni-1f6f2b67-1b8d-7879-d143-1d66fba18590" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.252 [INFO][4107] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" iface="eth0" netns="/var/run/netns/cni-1f6f2b67-1b8d-7879-d143-1d66fba18590" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.253 [INFO][4107] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" iface="eth0" netns="/var/run/netns/cni-1f6f2b67-1b8d-7879-d143-1d66fba18590" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.253 [INFO][4107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.253 [INFO][4107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.312 [INFO][4126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.312 [INFO][4126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.313 [INFO][4126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.327 [WARNING][4126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.327 [INFO][4126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.330 [INFO][4126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:13.342517 containerd[1476]: 2025-04-30 00:49:13.337 [INFO][4107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:13.342517 containerd[1476]: time="2025-04-30T00:49:13.342232777Z" level=info msg="TearDown network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\" successfully" Apr 30 00:49:13.342517 containerd[1476]: time="2025-04-30T00:49:13.342271737Z" level=info msg="StopPodSandbox for \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\" returns successfully" Apr 30 00:49:13.343481 systemd[1]: run-netns-cni\x2d1f6f2b67\x2d1b8d\x2d7879\x2dd143\x2d1d66fba18590.mount: Deactivated successfully. Apr 30 00:49:13.346481 containerd[1476]: time="2025-04-30T00:49:13.346117633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574958cb4d-svpdv,Uid:d29a6595-5453-4daa-b826-4b1dc5b8c3af,Namespace:calico-system,Attempt:1,}" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.255 [INFO][4099] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.257 [INFO][4099] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" iface="eth0" netns="/var/run/netns/cni-ead8d54a-7c23-44cd-2bc3-7b3cbd880c6a" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.258 [INFO][4099] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" iface="eth0" netns="/var/run/netns/cni-ead8d54a-7c23-44cd-2bc3-7b3cbd880c6a" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.259 [INFO][4099] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" iface="eth0" netns="/var/run/netns/cni-ead8d54a-7c23-44cd-2bc3-7b3cbd880c6a" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.259 [INFO][4099] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.259 [INFO][4099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.316 [INFO][4133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.316 [INFO][4133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.330 [INFO][4133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.348 [WARNING][4133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.348 [INFO][4133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.351 [INFO][4133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:13.359344 containerd[1476]: 2025-04-30 00:49:13.356 [INFO][4099] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:13.361115 containerd[1476]: time="2025-04-30T00:49:13.360552256Z" level=info msg="TearDown network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\" successfully" Apr 30 00:49:13.361115 containerd[1476]: time="2025-04-30T00:49:13.360591056Z" level=info msg="StopPodSandbox for \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\" returns successfully" Apr 30 00:49:13.365448 containerd[1476]: time="2025-04-30T00:49:13.364559353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-29prm,Uid:779394cb-9d3f-45fc-87b2-ac2caf222c9a,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:49:13.365016 systemd[1]: run-netns-cni\x2dead8d54a\x2d7c23\x2d44cd\x2d2bc3\x2d7b3cbd880c6a.mount: Deactivated successfully. Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.250 [INFO][4106] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.253 [INFO][4106] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" iface="eth0" netns="/var/run/netns/cni-7c7f5dd4-0abe-8d34-8afc-dd49b337164c" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.255 [INFO][4106] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" iface="eth0" netns="/var/run/netns/cni-7c7f5dd4-0abe-8d34-8afc-dd49b337164c" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.256 [INFO][4106] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" iface="eth0" netns="/var/run/netns/cni-7c7f5dd4-0abe-8d34-8afc-dd49b337164c" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.256 [INFO][4106] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.256 [INFO][4106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.318 [INFO][4128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.321 [INFO][4128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.351 [INFO][4128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.390 [WARNING][4128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.390 [INFO][4128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.397 [INFO][4128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:13.409122 containerd[1476]: 2025-04-30 00:49:13.405 [INFO][4106] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:13.409122 containerd[1476]: time="2025-04-30T00:49:13.409100065Z" level=info msg="TearDown network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\" successfully" Apr 30 00:49:13.410818 containerd[1476]: time="2025-04-30T00:49:13.409142465Z" level=info msg="StopPodSandbox for \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\" returns successfully" Apr 30 00:49:13.411270 containerd[1476]: time="2025-04-30T00:49:13.411191874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-58vbz,Uid:16b9cf8b-b97b-4135-b965-002368ca22b1,Namespace:kube-system,Attempt:1,}" Apr 30 00:49:13.614152 systemd-networkd[1367]: cali40603478634: Link UP Apr 30 00:49:13.615236 systemd-networkd[1367]: cali40603478634: Gained carrier Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.464 [INFO][4157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0 calico-apiserver-7766c54769- calico-apiserver 779394cb-9d3f-45fc-87b2-ac2caf222c9a 875 0 2025-04-30 00:48:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7766c54769 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-c-89ff891e34 calico-apiserver-7766c54769-29prm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali40603478634 [] []}} ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.464 [INFO][4157] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.519 [INFO][4187] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" HandleID="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.541 [INFO][4187] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" HandleID="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-c-89ff891e34", "pod":"calico-apiserver-7766c54769-29prm", "timestamp":"2025-04-30 00:49:13.519394221 +0000 UTC"}, Hostname:"ci-4081-3-3-c-89ff891e34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.542 [INFO][4187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.542 [INFO][4187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.542 [INFO][4187] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-89ff891e34' Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.545 [INFO][4187] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.556 [INFO][4187] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.567 [INFO][4187] ipam/ipam.go 489: Trying affinity for 192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.572 [INFO][4187] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.578 [INFO][4187] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.578 [INFO][4187] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.581 [INFO][4187] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6 Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.588 [INFO][4187] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.599 [INFO][4187] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.129/26] block=192.168.33.128/26 handle="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.599 [INFO][4187] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.129/26] handle="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.600 [INFO][4187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:13.655158 containerd[1476]: 2025-04-30 00:49:13.600 [INFO][4187] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.129/26] IPv6=[] ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" HandleID="k8s-pod-network.849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.655871 containerd[1476]: 2025-04-30 00:49:13.608 [INFO][4157] cni-plugin/k8s.go 386: Populated endpoint ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"779394cb-9d3f-45fc-87b2-ac2caf222c9a", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"", Pod:"calico-apiserver-7766c54769-29prm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40603478634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.655871 containerd[1476]: 2025-04-30 00:49:13.609 [INFO][4157] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.129/32] ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.655871 containerd[1476]: 2025-04-30 00:49:13.609 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40603478634 ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.655871 containerd[1476]: 2025-04-30 00:49:13.615 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.655871 containerd[1476]: 2025-04-30 00:49:13.616 [INFO][4157] cni-plugin/k8s.go
414: Added Mac, interface name, and active container ID to endpoint ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"779394cb-9d3f-45fc-87b2-ac2caf222c9a", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6", Pod:"calico-apiserver-7766c54769-29prm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40603478634", MAC:"a2:d4:5d:32:5a:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.655871 containerd[1476]: 2025-04-30 00:49:13.638 [INFO][4157] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-29prm" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:13.689679 containerd[1476]: time="2025-04-30T00:49:13.689254644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:13.689679 containerd[1476]: time="2025-04-30T00:49:13.689319084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:13.689679 containerd[1476]: time="2025-04-30T00:49:13.689330805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.689679 containerd[1476]: time="2025-04-30T00:49:13.689426965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.709187 systemd[1]: Started cri-containerd-849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6.scope - libcontainer container 849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6.
Apr 30 00:49:13.769469 containerd[1476]: time="2025-04-30T00:49:13.769340604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-29prm,Uid:779394cb-9d3f-45fc-87b2-ac2caf222c9a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6\"" Apr 30 00:49:13.775609 containerd[1476]: time="2025-04-30T00:49:13.774005305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:49:13.798002 systemd-networkd[1367]: cali857fb70d846: Link UP Apr 30 00:49:13.799547 systemd-networkd[1367]: cali857fb70d846: Gained carrier Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.467 [INFO][4148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0 calico-kube-controllers-574958cb4d- calico-system d29a6595-5453-4daa-b826-4b1dc5b8c3af 873 0 2025-04-30 00:48:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:574958cb4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-c-89ff891e34 calico-kube-controllers-574958cb4d-svpdv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali857fb70d846 [] []}} ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.467 [INFO][4148] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.547 [INFO][4182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" HandleID="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.571 [INFO][4182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" HandleID="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000fb110), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-c-89ff891e34", "pod":"calico-kube-controllers-574958cb4d-svpdv", "timestamp":"2025-04-30 00:49:13.547957664 +0000 UTC"}, Hostname:"ci-4081-3-3-c-89ff891e34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.572 [INFO][4182] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.599 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.600 [INFO][4182] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-89ff891e34' Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.649 [INFO][4182] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.740 [INFO][4182] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.753 [INFO][4182] ipam/ipam.go 489: Trying affinity for 192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.759 [INFO][4182] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.762 [INFO][4182] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.762 [INFO][4182] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.765 [INFO][4182] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005 Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.778 [INFO][4182] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.788 [INFO][4182] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.130/26] block=192.168.33.128/26 handle="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.789 [INFO][4182] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.130/26] handle="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.789 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:13.821430 containerd[1476]: 2025-04-30 00:49:13.789 [INFO][4182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.130/26] IPv6=[] ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" HandleID="k8s-pod-network.980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.822059 containerd[1476]: 2025-04-30 00:49:13.791 [INFO][4148] cni-plugin/k8s.go 386: Populated endpoint ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0", GenerateName:"calico-kube-controllers-574958cb4d-", Namespace:"calico-system", SelfLink:"", UID:"d29a6595-5453-4daa-b826-4b1dc5b8c3af", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"574958cb4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"", Pod:"calico-kube-controllers-574958cb4d-svpdv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali857fb70d846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.822059 containerd[1476]: 2025-04-30 00:49:13.792 [INFO][4148] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.130/32] ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.822059 containerd[1476]: 2025-04-30 00:49:13.792 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali857fb70d846 ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.822059 containerd[1476]: 2025-04-30 00:49:13.799 [INFO][4148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30
00:49:13.822059 containerd[1476]: 2025-04-30 00:49:13.800 [INFO][4148] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0", GenerateName:"calico-kube-controllers-574958cb4d-", Namespace:"calico-system", SelfLink:"", UID:"d29a6595-5453-4daa-b826-4b1dc5b8c3af", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"574958cb4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005", Pod:"calico-kube-controllers-574958cb4d-svpdv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali857fb70d846", MAC:"62:dc:f1:e4:a7:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.822059 containerd[1476]: 2025-04-30 00:49:13.816 [INFO][4148] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005" Namespace="calico-system" Pod="calico-kube-controllers-574958cb4d-svpdv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:13.870079 containerd[1476]: time="2025-04-30T00:49:13.869412453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:13.870079 containerd[1476]: time="2025-04-30T00:49:13.869479734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:13.870079 containerd[1476]: time="2025-04-30T00:49:13.869496214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.870079 containerd[1476]: time="2025-04-30T00:49:13.869579974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.892272 systemd[1]: Started cri-containerd-980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005.scope - libcontainer container 980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005. 
Apr 30 00:49:13.901537 systemd-networkd[1367]: cali9318c584e76: Link UP Apr 30 00:49:13.901997 systemd-networkd[1367]: cali9318c584e76: Gained carrier Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.527 [INFO][4170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0 coredns-6f6b679f8f- kube-system 16b9cf8b-b97b-4135-b965-002368ca22b1 874 0 2025-04-30 00:47:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-c-89ff891e34 coredns-6f6b679f8f-58vbz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9318c584e76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.527 [INFO][4170] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.594 [INFO][4199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" HandleID="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.739 [INFO][4199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" HandleID="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f9490), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-c-89ff891e34", "pod":"coredns-6f6b679f8f-58vbz", "timestamp":"2025-04-30 00:49:13.594124423 +0000 UTC"}, Hostname:"ci-4081-3-3-c-89ff891e34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.740 [INFO][4199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.790 [INFO][4199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.790 [INFO][4199] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-89ff891e34' Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.795 [INFO][4199] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.837 [INFO][4199] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.854 [INFO][4199] ipam/ipam.go 489: Trying affinity for 192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.858 [INFO][4199] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.862 [INFO][4199] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.862 [INFO][4199] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.867 [INFO][4199] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.876 [INFO][4199] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.887 [INFO][4199] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.131/26] block=192.168.33.128/26 handle="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.887 [INFO][4199] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.131/26] handle="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.887 [INFO][4199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:13.927562 containerd[1476]: 2025-04-30 00:49:13.887 [INFO][4199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.131/26] IPv6=[] ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" HandleID="k8s-pod-network.c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.928490 containerd[1476]: 2025-04-30 00:49:13.892 [INFO][4170] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16b9cf8b-b97b-4135-b965-002368ca22b1", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"", Pod:"coredns-6f6b679f8f-58vbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9318c584e76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.928490 containerd[1476]: 2025-04-30 00:49:13.892 [INFO][4170] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.131/32] ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.928490 containerd[1476]: 2025-04-30 00:49:13.892 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9318c584e76 ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.928490 containerd[1476]: 2025-04-30 00:49:13.903 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" 
WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.928490 containerd[1476]: 2025-04-30 00:49:13.903 [INFO][4170] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16b9cf8b-b97b-4135-b965-002368ca22b1", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a", Pod:"coredns-6f6b679f8f-58vbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9318c584e76", MAC:"a2:70:c1:7e:83:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.928490 containerd[1476]: 2025-04-30 00:49:13.923 [INFO][4170] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a" Namespace="kube-system" Pod="coredns-6f6b679f8f-58vbz" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:13.969138 containerd[1476]: time="2025-04-30T00:49:13.968524178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:13.969138 containerd[1476]: time="2025-04-30T00:49:13.968685659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:13.969138 containerd[1476]: time="2025-04-30T00:49:13.968719219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.970319 containerd[1476]: time="2025-04-30T00:49:13.969085061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.976297 containerd[1476]: time="2025-04-30T00:49:13.976236093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574958cb4d-svpdv,Uid:d29a6595-5453-4daa-b826-4b1dc5b8c3af,Namespace:calico-system,Attempt:1,} returns sandbox id \"980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005\"" Apr 30 00:49:13.999116 systemd[1]: Started cri-containerd-c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a.scope - libcontainer container c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a. Apr 30 00:49:14.038665 containerd[1476]: time="2025-04-30T00:49:14.038624887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-58vbz,Uid:16b9cf8b-b97b-4135-b965-002368ca22b1,Namespace:kube-system,Attempt:1,} returns sandbox id \"c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a\"" Apr 30 00:49:14.042287 containerd[1476]: time="2025-04-30T00:49:14.042242667Z" level=info msg="CreateContainer within sandbox \"c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:49:14.055869 containerd[1476]: time="2025-04-30T00:49:14.055785219Z" level=info msg="CreateContainer within sandbox \"c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a48437b324242205aef23357c94c271980d6e3071f6503467242c4d47671c49e\"" Apr 30 00:49:14.057302 containerd[1476]: time="2025-04-30T00:49:14.057215067Z" level=info msg="StartContainer for \"a48437b324242205aef23357c94c271980d6e3071f6503467242c4d47671c49e\"" Apr 30 00:49:14.088595 systemd[1]: Started cri-containerd-a48437b324242205aef23357c94c271980d6e3071f6503467242c4d47671c49e.scope - libcontainer container a48437b324242205aef23357c94c271980d6e3071f6503467242c4d47671c49e. Apr 30 00:49:14.117697 containerd[1476]: time="2025-04-30T00:49:14.117629073Z" level=info msg="StartContainer for \"a48437b324242205aef23357c94c271980d6e3071f6503467242c4d47671c49e\" returns successfully" Apr 30 00:49:14.121769 containerd[1476]: time="2025-04-30T00:49:14.121718135Z" level=info msg="StopPodSandbox for \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\"" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.203 [INFO][4414] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.203 [INFO][4414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" iface="eth0" netns="/var/run/netns/cni-8720226a-8261-e424-91a4-b42ead094cb3" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.203 [INFO][4414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" iface="eth0" netns="/var/run/netns/cni-8720226a-8261-e424-91a4-b42ead094cb3" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.203 [INFO][4414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" iface="eth0" netns="/var/run/netns/cni-8720226a-8261-e424-91a4-b42ead094cb3" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.204 [INFO][4414] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.204 [INFO][4414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.243 [INFO][4425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.243 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.243 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.255 [WARNING][4425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.255 [INFO][4425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.258 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:14.265381 containerd[1476]: 2025-04-30 00:49:14.262 [INFO][4414] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:14.266812 containerd[1476]: time="2025-04-30T00:49:14.265972151Z" level=info msg="TearDown network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\" successfully" Apr 30 00:49:14.266812 containerd[1476]: time="2025-04-30T00:49:14.266040392Z" level=info msg="StopPodSandbox for \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\" returns successfully" Apr 30 00:49:14.267172 containerd[1476]: time="2025-04-30T00:49:14.267140918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hj5vb,Uid:3a56768d-4969-42c8-b398-626188e5b2d7,Namespace:kube-system,Attempt:1,}" Apr 30 00:49:14.362284 systemd[1]: run-netns-cni\x2d7c7f5dd4\x2d0abe\x2d8d34\x2d8afc\x2ddd49b337164c.mount: Deactivated successfully. Apr 30 00:49:14.362400 systemd[1]: run-netns-cni\x2d8720226a\x2d8261\x2de424\x2d91a4\x2db42ead094cb3.mount: Deactivated successfully. 
Apr 30 00:49:14.470245 systemd-networkd[1367]: cali96762031212: Link UP Apr 30 00:49:14.471996 systemd-networkd[1367]: cali96762031212: Gained carrier Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.334 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0 coredns-6f6b679f8f- kube-system 3a56768d-4969-42c8-b398-626188e5b2d7 891 0 2025-04-30 00:47:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-c-89ff891e34 coredns-6f6b679f8f-hj5vb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96762031212 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.334 [INFO][4432] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.392 [INFO][4445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" HandleID="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.411 [INFO][4445] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" HandleID="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-c-89ff891e34", "pod":"coredns-6f6b679f8f-hj5vb", "timestamp":"2025-04-30 00:49:14.39201291 +0000 UTC"}, Hostname:"ci-4081-3-3-c-89ff891e34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.411 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.411 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.411 [INFO][4445] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-89ff891e34' Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.415 [INFO][4445] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.421 [INFO][4445] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.435 [INFO][4445] ipam/ipam.go 489: Trying affinity for 192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.439 [INFO][4445] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.443 [INFO][4445] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.443 [INFO][4445] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.445 [INFO][4445] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.451 [INFO][4445] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.460 [INFO][4445] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.132/26] block=192.168.33.128/26 handle="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.461 [INFO][4445] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.132/26] handle="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.461 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:14.498953 containerd[1476]: 2025-04-30 00:49:14.461 [INFO][4445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.132/26] IPv6=[] ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" HandleID="k8s-pod-network.ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.501391 containerd[1476]: 2025-04-30 00:49:14.465 [INFO][4432] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a56768d-4969-42c8-b398-626188e5b2d7", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"", Pod:"coredns-6f6b679f8f-hj5vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96762031212", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:14.501391 containerd[1476]: 2025-04-30 00:49:14.465 [INFO][4432] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.132/32] ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.501391 containerd[1476]: 2025-04-30 00:49:14.465 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96762031212 ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.501391 containerd[1476]: 2025-04-30 00:49:14.470 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" 
WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.501391 containerd[1476]: 2025-04-30 00:49:14.471 [INFO][4432] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a56768d-4969-42c8-b398-626188e5b2d7", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f", Pod:"coredns-6f6b679f8f-hj5vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96762031212", MAC:"ca:fb:4f:5e:66:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:14.501391 containerd[1476]: 2025-04-30 00:49:14.492 [INFO][4432] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f" Namespace="kube-system" Pod="coredns-6f6b679f8f-hj5vb" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:14.511989 kubelet[2670]: I0430 00:49:14.511267 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-58vbz" podStartSLOduration=79.511247552 podStartE2EDuration="1m19.511247552s" podCreationTimestamp="2025-04-30 00:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:49:14.510928311 +0000 UTC m=+85.511645040" watchObservedRunningTime="2025-04-30 00:49:14.511247552 +0000 UTC m=+85.511964281" Apr 30 00:49:14.548243 containerd[1476]: time="2025-04-30T00:49:14.547974070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:14.548243 containerd[1476]: time="2025-04-30T00:49:14.548040111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:14.548243 containerd[1476]: time="2025-04-30T00:49:14.548051711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.548504 containerd[1476]: time="2025-04-30T00:49:14.548152151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.582216 systemd[1]: Started cri-containerd-ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f.scope - libcontainer container ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f. Apr 30 00:49:14.627452 containerd[1476]: time="2025-04-30T00:49:14.627398578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hj5vb,Uid:3a56768d-4969-42c8-b398-626188e5b2d7,Namespace:kube-system,Attempt:1,} returns sandbox id \"ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f\"" Apr 30 00:49:14.631249 containerd[1476]: time="2025-04-30T00:49:14.631206639Z" level=info msg="CreateContainer within sandbox \"ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:49:14.650519 containerd[1476]: time="2025-04-30T00:49:14.650456662Z" level=info msg="CreateContainer within sandbox \"ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5aa899b4388db0528bf3f3663a3ffc7006c4a6cab07573dd5f536934877528da\"" Apr 30 00:49:14.652518 containerd[1476]: time="2025-04-30T00:49:14.652456073Z" level=info msg="StartContainer for \"5aa899b4388db0528bf3f3663a3ffc7006c4a6cab07573dd5f536934877528da\"" Apr 30 00:49:14.680136 systemd[1]: Started cri-containerd-5aa899b4388db0528bf3f3663a3ffc7006c4a6cab07573dd5f536934877528da.scope - libcontainer container 5aa899b4388db0528bf3f3663a3ffc7006c4a6cab07573dd5f536934877528da. 
Apr 30 00:49:14.714215 containerd[1476]: time="2025-04-30T00:49:14.714166325Z" level=info msg="StartContainer for \"5aa899b4388db0528bf3f3663a3ffc7006c4a6cab07573dd5f536934877528da\" returns successfully" Apr 30 00:49:14.932513 systemd-networkd[1367]: cali40603478634: Gained IPv6LL Apr 30 00:49:15.444416 systemd-networkd[1367]: cali9318c584e76: Gained IPv6LL Apr 30 00:49:15.521548 kubelet[2670]: I0430 00:49:15.521462 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hj5vb" podStartSLOduration=80.521440302 podStartE2EDuration="1m20.521440302s" podCreationTimestamp="2025-04-30 00:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:49:15.520637137 +0000 UTC m=+86.521353866" watchObservedRunningTime="2025-04-30 00:49:15.521440302 +0000 UTC m=+86.522157031" Apr 30 00:49:15.700633 systemd-networkd[1367]: cali857fb70d846: Gained IPv6LL Apr 30 00:49:16.122585 containerd[1476]: time="2025-04-30T00:49:16.122471842Z" level=info msg="StopPodSandbox for \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\"" Apr 30 00:49:16.123845 containerd[1476]: time="2025-04-30T00:49:16.123461648Z" level=info msg="StopPodSandbox for \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\"" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.197 [INFO][4583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.197 [INFO][4583] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" iface="eth0" netns="/var/run/netns/cni-183f16f3-65b8-10f6-4c15-620cb50e8b03" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.197 [INFO][4583] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" iface="eth0" netns="/var/run/netns/cni-183f16f3-65b8-10f6-4c15-620cb50e8b03" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.198 [INFO][4583] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" iface="eth0" netns="/var/run/netns/cni-183f16f3-65b8-10f6-4c15-620cb50e8b03" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.198 [INFO][4583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.198 [INFO][4583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.237 [INFO][4595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.237 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.237 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.252 [WARNING][4595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.252 [INFO][4595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.255 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:16.259861 containerd[1476]: 2025-04-30 00:49:16.258 [INFO][4583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:16.262762 systemd[1]: run-netns-cni\x2d183f16f3\x2d65b8\x2d10f6\x2d4c15\x2d620cb50e8b03.mount: Deactivated successfully. Apr 30 00:49:16.265303 containerd[1476]: time="2025-04-30T00:49:16.262857632Z" level=info msg="TearDown network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\" successfully" Apr 30 00:49:16.265303 containerd[1476]: time="2025-04-30T00:49:16.262935393Z" level=info msg="StopPodSandbox for \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\" returns successfully" Apr 30 00:49:16.265303 containerd[1476]: time="2025-04-30T00:49:16.264291400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpvmp,Uid:6de56a42-e7e2-4279-87ca-32df2fc92dd6,Namespace:calico-system,Attempt:1,}" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.207 [INFO][4582] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.209 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" iface="eth0" netns="/var/run/netns/cni-3879b519-7074-aab3-6e59-1047dc47f5b7" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.209 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" iface="eth0" netns="/var/run/netns/cni-3879b519-7074-aab3-6e59-1047dc47f5b7" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.210 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" iface="eth0" netns="/var/run/netns/cni-3879b519-7074-aab3-6e59-1047dc47f5b7" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.210 [INFO][4582] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.210 [INFO][4582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.244 [INFO][4600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.244 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.255 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.274 [WARNING][4600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.274 [INFO][4600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.277 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:16.292653 containerd[1476]: 2025-04-30 00:49:16.284 [INFO][4582] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:16.292653 containerd[1476]: time="2025-04-30T00:49:16.292524191Z" level=info msg="TearDown network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\" successfully" Apr 30 00:49:16.292653 containerd[1476]: time="2025-04-30T00:49:16.292551151Z" level=info msg="StopPodSandbox for \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\" returns successfully" Apr 30 00:49:16.295631 containerd[1476]: time="2025-04-30T00:49:16.295228085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-sk6mv,Uid:4e7144e4-2906-40ae-a883-218aba9bbba0,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:49:16.296716 systemd[1]: run-netns-cni\x2d3879b519\x2d7074\x2daab3\x2d6e59\x2d1047dc47f5b7.mount: Deactivated successfully. 
Apr 30 00:49:16.468305 systemd-networkd[1367]: cali96762031212: Gained IPv6LL Apr 30 00:49:16.494221 systemd-networkd[1367]: cali1b5ed042d7c: Link UP Apr 30 00:49:16.495517 systemd-networkd[1367]: cali1b5ed042d7c: Gained carrier Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.354 [INFO][4610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0 csi-node-driver- calico-system 6de56a42-e7e2-4279-87ca-32df2fc92dd6 918 0 2025-04-30 00:48:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-c-89ff891e34 csi-node-driver-wpvmp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1b5ed042d7c [] []}} ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.354 [INFO][4610] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.412 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" HandleID="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.429 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" HandleID="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-c-89ff891e34", "pod":"csi-node-driver-wpvmp", "timestamp":"2025-04-30 00:49:16.41220407 +0000 UTC"}, Hostname:"ci-4081-3-3-c-89ff891e34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.431 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.431 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.431 [INFO][4635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-89ff891e34' Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.434 [INFO][4635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.448 [INFO][4635] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.459 [INFO][4635] ipam/ipam.go 489: Trying affinity for 192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.462 [INFO][4635] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.465 [INFO][4635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.465 [INFO][4635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.471 [INFO][4635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.477 [INFO][4635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.486 [INFO][4635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.133/26] block=192.168.33.128/26 handle="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.486 [INFO][4635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.133/26] handle="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.486 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:16.525757 containerd[1476]: 2025-04-30 00:49:16.486 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.133/26] IPv6=[] ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" HandleID="k8s-pod-network.cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.527257 containerd[1476]: 2025-04-30 00:49:16.489 [INFO][4610] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6de56a42-e7e2-4279-87ca-32df2fc92dd6", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"", Pod:"csi-node-driver-wpvmp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b5ed042d7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:16.527257 containerd[1476]: 2025-04-30 00:49:16.490 [INFO][4610] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.133/32] ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.527257 containerd[1476]: 2025-04-30 00:49:16.490 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b5ed042d7c ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.527257 containerd[1476]: 2025-04-30 00:49:16.495 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.527257 containerd[1476]: 2025-04-30 00:49:16.496 [INFO][4610] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6de56a42-e7e2-4279-87ca-32df2fc92dd6", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b", Pod:"csi-node-driver-wpvmp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b5ed042d7c", MAC:"fa:88:09:cb:1c:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:16.527257 containerd[1476]: 2025-04-30 00:49:16.522 [INFO][4610] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b" Namespace="calico-system" Pod="csi-node-driver-wpvmp" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:16.572756 containerd[1476]: time="2025-04-30T00:49:16.572586807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:16.575527 containerd[1476]: time="2025-04-30T00:49:16.575414542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:16.576348 containerd[1476]: time="2025-04-30T00:49:16.575676743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:16.578005 containerd[1476]: time="2025-04-30T00:49:16.577288232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:16.619091 systemd[1]: Started cri-containerd-cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b.scope - libcontainer container cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b. 
Apr 30 00:49:16.621623 systemd-networkd[1367]: caliadafc63a3b3: Link UP Apr 30 00:49:16.624119 systemd-networkd[1367]: caliadafc63a3b3: Gained carrier Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.386 [INFO][4620] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0 calico-apiserver-7766c54769- calico-apiserver 4e7144e4-2906-40ae-a883-218aba9bbba0 919 0 2025-04-30 00:48:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7766c54769 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-c-89ff891e34 calico-apiserver-7766c54769-sk6mv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliadafc63a3b3 [] []}} ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.386 [INFO][4620] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.442 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" HandleID="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.531 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" HandleID="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-c-89ff891e34", "pod":"calico-apiserver-7766c54769-sk6mv", "timestamp":"2025-04-30 00:49:16.442888234 +0000 UTC"}, Hostname:"ci-4081-3-3-c-89ff891e34", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.531 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.531 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.531 [INFO][4641] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-c-89ff891e34' Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.535 [INFO][4641] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.540 [INFO][4641] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.561 [INFO][4641] ipam/ipam.go 489: Trying affinity for 192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.565 [INFO][4641] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.576 [INFO][4641] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.576 [INFO][4641] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.586 [INFO][4641] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557 Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.596 [INFO][4641] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.612 [INFO][4641] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.134/26] block=192.168.33.128/26 handle="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.612 [INFO][4641] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.134/26] handle="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" host="ci-4081-3-3-c-89ff891e34" Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.612 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
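The [4641] IPAM trace above walks Calico's block-affinity allocation step by step: take the host-wide IPAM lock, look up the node's affine /26 block (192.168.33.128/26), load it, claim one free address, write the block back, and release the lock. A toy model of that loop follows, with simplified types that are not calico/ipam's real implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// ipamBlock is a simplified stand-in for a Calico /26 allocation block
// with affinity to one node.
type ipamBlock struct {
	cidr string   // e.g. "192.168.33.128/26"
	free []string // addresses not yet claimed
}

// hostLock stands in for the "host-wide IPAM lock" acquired and released
// around every assignment in the trace above.
var hostLock sync.Mutex

// autoAssign claims one address from a block the host is affine to and
// "writes the block back" by mutating it, mirroring the logged steps.
func autoAssign(b *ipamBlock, handleID string) (string, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	if len(b.free) == 0 {
		return "", fmt.Errorf("block %s exhausted; a real IPAM would claim a new block", b.cidr)
	}
	ip := b.free[0]     // "Attempting to assign 1 addresses from block"
	b.free = b.free[1:] // "Writing block in order to claim IPs"
	fmt.Printf("claimed %s for handle %s\n", ip, handleID)
	return ip, nil
}

func main() {
	b := &ipamBlock{cidr: "192.168.33.128/26", free: []string{"192.168.33.134"}}
	autoAssign(b, "k8s-pod-network.bfc49a0e19e0")
}
```

Serializing every assignment behind the single host-wide lock is what lets the interleaved [4635] and [4641] requests above run concurrently without handing out the same address twice.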
Apr 30 00:49:16.647768 containerd[1476]: 2025-04-30 00:49:16.612 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.134/26] IPv6=[] ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" HandleID="k8s-pod-network.bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.648481 containerd[1476]: 2025-04-30 00:49:16.615 [INFO][4620] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e7144e4-2906-40ae-a883-218aba9bbba0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"", Pod:"calico-apiserver-7766c54769-sk6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadafc63a3b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:16.648481 containerd[1476]: 2025-04-30 00:49:16.615 [INFO][4620] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.134/32] ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.648481 containerd[1476]: 2025-04-30 00:49:16.615 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadafc63a3b3 ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.648481 containerd[1476]: 2025-04-30 00:49:16.623 [INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.648481 containerd[1476]: 2025-04-30 00:49:16.625 [INFO][4620] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e7144e4-2906-40ae-a883-218aba9bbba0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557", Pod:"calico-apiserver-7766c54769-sk6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadafc63a3b3", MAC:"ea:db:f1:ff:19:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:16.648481 containerd[1476]: 2025-04-30 00:49:16.644 [INFO][4620] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557" Namespace="calico-apiserver" Pod="calico-apiserver-7766c54769-sk6mv" WorkloadEndpoint="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:16.671903 containerd[1476]: time="2025-04-30T00:49:16.671809377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpvmp,Uid:6de56a42-e7e2-4279-87ca-32df2fc92dd6,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b\"" Apr 30 00:49:16.688686 containerd[1476]: time="2025-04-30T00:49:16.688219984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:16.688686 containerd[1476]: time="2025-04-30T00:49:16.688328225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:16.688686 containerd[1476]: time="2025-04-30T00:49:16.688366825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:16.688686 containerd[1476]: time="2025-04-30T00:49:16.688517226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:16.718122 systemd[1]: Started cri-containerd-bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557.scope - libcontainer container bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557. Apr 30 00:49:16.759783 containerd[1476]: time="2025-04-30T00:49:16.759733366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766c54769-sk6mv,Uid:4e7144e4-2906-40ae-a883-218aba9bbba0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557\"" Apr 30 00:49:17.684304 systemd-networkd[1367]: caliadafc63a3b3: Gained IPv6LL Apr 30 00:49:17.748228 systemd-networkd[1367]: cali1b5ed042d7c: Gained IPv6LL Apr 30 00:49:18.855131 update_engine[1463]: I20250430 00:49:18.855019 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:49:18.855602 update_engine[1463]: I20250430 00:49:18.855325 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:49:18.855602 update_engine[1463]: I20250430 00:49:18.855571 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 00:49:18.857670 update_engine[1463]: E20250430 00:49:18.857594 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:49:18.857819 update_engine[1463]: I20250430 00:49:18.857694 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 00:49:28.862915 update_engine[1463]: I20250430 00:49:28.862732 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:49:28.863405 update_engine[1463]: I20250430 00:49:28.863245 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:49:28.863708 update_engine[1463]: I20250430 00:49:28.863612 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 00:49:28.864594 update_engine[1463]: E20250430 00:49:28.864512 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:49:28.864725 update_engine[1463]: I20250430 00:49:28.864627 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 00:49:38.862236 update_engine[1463]: I20250430 00:49:38.862123 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:49:38.862942 update_engine[1463]: I20250430 00:49:38.862422 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:49:38.862942 update_engine[1463]: I20250430 00:49:38.862672 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 00:49:38.863847 update_engine[1463]: E20250430 00:49:38.863698 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:49:38.863847 update_engine[1463]: I20250430 00:49:38.863803 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 00:49:38.863847 update_engine[1463]: I20250430 00:49:38.863816 1463 omaha_request_action.cc:617] Omaha request response: Apr 30 00:49:38.864035 update_engine[1463]: E20250430 00:49:38.863925 1463 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 00:49:38.864035 update_engine[1463]: I20250430 00:49:38.863956 1463 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Apr 30 00:49:38.864035 update_engine[1463]: I20250430 00:49:38.863981 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 00:49:38.864035 update_engine[1463]: I20250430 00:49:38.863987 1463 update_attempter.cc:306] Processing Done. Apr 30 00:49:38.864035 update_engine[1463]: E20250430 00:49:38.864002 1463 update_attempter.cc:619] Update failed. Apr 30 00:49:38.864035 update_engine[1463]: I20250430 00:49:38.864007 1463 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 00:49:38.864035 update_engine[1463]: I20250430 00:49:38.864012 1463 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 00:49:38.864035 update_engine[1463]: I20250430 00:49:38.864017 1463 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 00:49:38.864268 update_engine[1463]: I20250430 00:49:38.864086 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 00:49:38.864268 update_engine[1463]: I20250430 00:49:38.864110 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 00:49:38.864268 update_engine[1463]: I20250430 00:49:38.864115 1463 omaha_request_action.cc:272] Request: Apr 30 00:49:38.864268 update_engine[1463]: I20250430 00:49:38.864121 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:49:38.864502 update_engine[1463]: I20250430 00:49:38.864270 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:49:38.864502 update_engine[1463]: I20250430 00:49:38.864445 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 00:49:38.864877 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 00:49:38.865327 update_engine[1463]: E20250430 00:49:38.865274 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:49:38.865366 update_engine[1463]: I20250430 00:49:38.865334 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 00:49:38.865366 update_engine[1463]: I20250430 00:49:38.865343 1463 omaha_request_action.cc:617] Omaha request response: Apr 30 00:49:38.865366 update_engine[1463]: I20250430 00:49:38.865350 1463 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 00:49:38.865366 update_engine[1463]: I20250430 00:49:38.865356 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 00:49:38.865366 update_engine[1463]: I20250430 00:49:38.865361 1463 update_attempter.cc:306] Processing Done. Apr 30 00:49:38.865491 update_engine[1463]: I20250430 00:49:38.865368 1463 update_attempter.cc:310] Error event sent.
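The update_engine entries above show the failure path when updates are disabled: the configured Omaha URL is the literal string "disabled", so every libcurl fetch fails DNS resolution immediately, the fetcher retries on a short timer, the dead transfer is converted to kActionCodeOmahaErrorInHTTPResponse, an error event is posted (also unresolvable), and the next periodic check is scheduled roughly 40 minutes out. update_engine itself is C++ (libcurl_http_fetcher.cc, omaha_request_action.cc); the Go sketch below only mirrors the control flow, and the retry count and jitter constants are made up for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// fetch stands in for libcurl_http_fetcher; resolving the host "disabled"
// fails immediately, which is exactly the error in the log.
func fetch(url string) error {
	return errors.New("Could not resolve host: " + url)
}

// updateCheck models the shape of the loop: a few quick retries, then the
// failure collapses to a single Omaha error code, an error event is
// reported, and the next check is scheduled with jitter.
func updateCheck(url string, maxRetries int) {
	for attempt := 1; attempt <= maxRetries; attempt++ {
		err := fetch(url)
		if err == nil {
			return // got a response; proceed with the Omaha exchange
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(time.Second) // "Setting up timeout source: 1 seconds."
	}
	fmt.Println("Update failed; converting to kActionCodeOmahaErrorInHTTPResponse")
	fmt.Println("Error event sent.")
	next := 30*time.Minute + time.Duration(rand.Intn(45))*time.Minute // jittered interval
	fmt.Printf("Next update check in %s\n", next)
}

func main() { updateCheck("disabled", 3) }
```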
Apr 30 00:49:38.865491 update_engine[1463]: I20250430 00:49:38.865377 1463 update_check_scheduler.cc:74] Next update check in 40m18s Apr 30 00:49:38.865714 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 00:49:49.133080 containerd[1476]: time="2025-04-30T00:49:49.133029985Z" level=info msg="StopPodSandbox for \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\"" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.204 [WARNING][4818] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a56768d-4969-42c8-b398-626188e5b2d7", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f", Pod:"coredns-6f6b679f8f-hj5vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96762031212", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.205 [INFO][4818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.205 [INFO][4818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" iface="eth0" netns="" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.205 [INFO][4818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.206 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.232 [INFO][4825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.232 [INFO][4825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.233 [INFO][4825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.252 [WARNING][4825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.253 [INFO][4825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.257 [INFO][4825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.261708 containerd[1476]: 2025-04-30 00:49:49.259 [INFO][4818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.261708 containerd[1476]: time="2025-04-30T00:49:49.261703560Z" level=info msg="TearDown network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\" successfully" Apr 30 00:49:49.262989 containerd[1476]: time="2025-04-30T00:49:49.261729560Z" level=info msg="StopPodSandbox for \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\" returns successfully" Apr 30 00:49:49.262989 containerd[1476]: time="2025-04-30T00:49:49.262505484Z" level=info msg="RemovePodSandbox for \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\"" Apr 30 00:49:49.262989 containerd[1476]: time="2025-04-30T00:49:49.262581124Z" level=info msg="Forcibly stopping sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\"" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.309 [WARNING][4843] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a56768d-4969-42c8-b398-626188e5b2d7", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"ffd15e21c2e28b7c74449143aafb50fb194d4b6aba381b72dcd31b95eb41203f", Pod:"coredns-6f6b679f8f-hj5vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96762031212", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.309 [INFO][4843] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.311 [INFO][4843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" iface="eth0" netns="" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.311 [INFO][4843] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.311 [INFO][4843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.334 [INFO][4850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.335 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.335 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.350 [WARNING][4850] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.350 [INFO][4850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" HandleID="k8s-pod-network.c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--hj5vb-eth0" Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.353 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.357343 containerd[1476]: 2025-04-30 00:49:49.355 [INFO][4843] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451" Apr 30 00:49:49.357853 containerd[1476]: time="2025-04-30T00:49:49.357405258Z" level=info msg="TearDown network for sandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\" successfully" Apr 30 00:49:49.363108 containerd[1476]: time="2025-04-30T00:49:49.363053365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:49.363108 containerd[1476]: time="2025-04-30T00:49:49.363134205Z" level=info msg="RemovePodSandbox \"c0f5220dddcfe0b17697f869774beaf35c96160a39a954d9fbf1b226a9326451\" returns successfully" Apr 30 00:49:49.363774 containerd[1476]: time="2025-04-30T00:49:49.363729368Z" level=info msg="StopPodSandbox for \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\"" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.417 [WARNING][4869] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0", GenerateName:"calico-kube-controllers-574958cb4d-", Namespace:"calico-system", SelfLink:"", UID:"d29a6595-5453-4daa-b826-4b1dc5b8c3af", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"574958cb4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005", Pod:"calico-kube-controllers-574958cb4d-svpdv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali857fb70d846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.417 [INFO][4869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.417 [INFO][4869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" iface="eth0" netns="" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.417 [INFO][4869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.417 [INFO][4869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.444 [INFO][4876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.444 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.444 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.454 [WARNING][4876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.454 [INFO][4876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.456 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.460044 containerd[1476]: 2025-04-30 00:49:49.458 [INFO][4869] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.461346 containerd[1476]: time="2025-04-30T00:49:49.459881547Z" level=info msg="TearDown network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\" successfully" Apr 30 00:49:49.461346 containerd[1476]: time="2025-04-30T00:49:49.460857392Z" level=info msg="StopPodSandbox for \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\" returns successfully" Apr 30 00:49:49.462189 containerd[1476]: time="2025-04-30T00:49:49.462153798Z" level=info msg="RemovePodSandbox for \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\"" Apr 30 00:49:49.462303 containerd[1476]: time="2025-04-30T00:49:49.462198158Z" level=info msg="Forcibly stopping sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\"" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.515 [WARNING][4894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0", GenerateName:"calico-kube-controllers-574958cb4d-", Namespace:"calico-system", SelfLink:"", UID:"d29a6595-5453-4daa-b826-4b1dc5b8c3af", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"574958cb4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005", Pod:"calico-kube-controllers-574958cb4d-svpdv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali857fb70d846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.516 [INFO][4894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.516 [INFO][4894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" iface="eth0" netns="" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.516 [INFO][4894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.516 [INFO][4894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.538 [INFO][4902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.538 [INFO][4902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.538 [INFO][4902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.557 [WARNING][4902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.557 [INFO][4902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" HandleID="k8s-pod-network.ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--kube--controllers--574958cb4d--svpdv-eth0" Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.559 [INFO][4902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.564821 containerd[1476]: 2025-04-30 00:49:49.561 [INFO][4894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3" Apr 30 00:49:49.564821 containerd[1476]: time="2025-04-30T00:49:49.564290526Z" level=info msg="TearDown network for sandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\" successfully" Apr 30 00:49:49.569398 containerd[1476]: time="2025-04-30T00:49:49.569225070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:49.569398 containerd[1476]: time="2025-04-30T00:49:49.569296790Z" level=info msg="RemovePodSandbox \"ede6fa4e5e4b41ede6ebf855ebc53560b613e1f0652b2aadfe69d61bb40888c3\" returns successfully" Apr 30 00:49:49.569905 containerd[1476]: time="2025-04-30T00:49:49.569864793Z" level=info msg="StopPodSandbox for \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\"" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.613 [WARNING][4920] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16b9cf8b-b97b-4135-b965-002368ca22b1", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a", Pod:"coredns-6f6b679f8f-58vbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9318c584e76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.614 [INFO][4920] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.614 [INFO][4920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" iface="eth0" netns="" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.614 [INFO][4920] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.614 [INFO][4920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.640 [INFO][4927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.640 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.640 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.653 [WARNING][4927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.654 [INFO][4927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.656 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.659568 containerd[1476]: 2025-04-30 00:49:49.657 [INFO][4920] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.660203 containerd[1476]: time="2025-04-30T00:49:49.659629702Z" level=info msg="TearDown network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\" successfully" Apr 30 00:49:49.660203 containerd[1476]: time="2025-04-30T00:49:49.659664982Z" level=info msg="StopPodSandbox for \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\" returns successfully" Apr 30 00:49:49.660710 containerd[1476]: time="2025-04-30T00:49:49.660674227Z" level=info msg="RemovePodSandbox for \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\"" Apr 30 00:49:49.660787 containerd[1476]: time="2025-04-30T00:49:49.660718507Z" level=info msg="Forcibly stopping sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\"" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.708 [WARNING][4945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"16b9cf8b-b97b-4135-b965-002368ca22b1", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"c7569ae9fcd196c2d14b5458049b0a2452885ef5980295fbb880458ea49ff98a", Pod:"coredns-6f6b679f8f-58vbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9318c584e76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.709 [INFO][4945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.709 [INFO][4945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" iface="eth0" netns="" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.709 [INFO][4945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.709 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.731 [INFO][4953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.731 [INFO][4953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.731 [INFO][4953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.743 [WARNING][4953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.743 [INFO][4953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" HandleID="k8s-pod-network.38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Workload="ci--4081--3--3--c--89ff891e34-k8s-coredns--6f6b679f8f--58vbz-eth0" Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.746 [INFO][4953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.750429 containerd[1476]: 2025-04-30 00:49:49.748 [INFO][4945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e" Apr 30 00:49:49.750862 containerd[1476]: time="2025-04-30T00:49:49.750435696Z" level=info msg="TearDown network for sandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\" successfully" Apr 30 00:49:49.757183 containerd[1476]: time="2025-04-30T00:49:49.756866087Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:49.757183 containerd[1476]: time="2025-04-30T00:49:49.757009088Z" level=info msg="RemovePodSandbox \"38487cf29079bd0f162490b3abf8782bd3e12a4616bb25e7b91adeb39f4dfa3e\" returns successfully" Apr 30 00:49:49.757746 containerd[1476]: time="2025-04-30T00:49:49.757668731Z" level=info msg="StopPodSandbox for \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\"" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.798 [WARNING][4971] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e7144e4-2906-40ae-a883-218aba9bbba0", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557", Pod:"calico-apiserver-7766c54769-sk6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadafc63a3b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.798 [INFO][4971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.798 [INFO][4971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" iface="eth0" netns="" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.798 [INFO][4971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.798 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.824 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.824 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.824 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.836 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.837 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.839 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.843223 containerd[1476]: 2025-04-30 00:49:49.841 [INFO][4971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.845526 containerd[1476]: time="2025-04-30T00:49:49.843270460Z" level=info msg="TearDown network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\" successfully" Apr 30 00:49:49.845526 containerd[1476]: time="2025-04-30T00:49:49.843297500Z" level=info msg="StopPodSandbox for \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\" returns successfully" Apr 30 00:49:49.845526 containerd[1476]: time="2025-04-30T00:49:49.843850983Z" level=info msg="RemovePodSandbox for \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\"" Apr 30 00:49:49.845526 containerd[1476]: time="2025-04-30T00:49:49.843884063Z" level=info msg="Forcibly stopping sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\"" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.892 [WARNING][4996] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e7144e4-2906-40ae-a883-218aba9bbba0", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557", Pod:"calico-apiserver-7766c54769-sk6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadafc63a3b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.892 [INFO][4996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.892 [INFO][4996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" iface="eth0" netns="" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.892 [INFO][4996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.892 [INFO][4996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.917 [INFO][5004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.918 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.918 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.930 [WARNING][5004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.930 [INFO][5004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" HandleID="k8s-pod-network.411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--sk6mv-eth0" Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.932 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:49.936131 containerd[1476]: 2025-04-30 00:49:49.934 [INFO][4996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119" Apr 30 00:49:49.936592 containerd[1476]: time="2025-04-30T00:49:49.936169464Z" level=info msg="TearDown network for sandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\" successfully" Apr 30 00:49:49.940253 containerd[1476]: time="2025-04-30T00:49:49.940167003Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:49.940636 containerd[1476]: time="2025-04-30T00:49:49.940288164Z" level=info msg="RemovePodSandbox \"411896d2f900c0f797e390e11814c9acbfde489d0e517cd754d56c2cadb18119\" returns successfully" Apr 30 00:49:49.942618 containerd[1476]: time="2025-04-30T00:49:49.942578254Z" level=info msg="StopPodSandbox for \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\"" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:49.988 [WARNING][5023] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6de56a42-e7e2-4279-87ca-32df2fc92dd6", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b", Pod:"csi-node-driver-wpvmp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b5ed042d7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:49.989 [INFO][5023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:49.989 [INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" iface="eth0" netns="" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:49.989 [INFO][5023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:49.989 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:50.020 [INFO][5030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:50.020 [INFO][5030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:50.020 [INFO][5030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:50.032 [WARNING][5030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:50.032 [INFO][5030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:50.035 [INFO][5030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:50.039077 containerd[1476]: 2025-04-30 00:49:50.037 [INFO][5023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.039769 containerd[1476]: time="2025-04-30T00:49:50.039101355Z" level=info msg="TearDown network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\" successfully" Apr 30 00:49:50.039769 containerd[1476]: time="2025-04-30T00:49:50.039137676Z" level=info msg="StopPodSandbox for \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\" returns successfully" Apr 30 00:49:50.041681 containerd[1476]: time="2025-04-30T00:49:50.040147920Z" level=info msg="RemovePodSandbox for \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\"" Apr 30 00:49:50.041681 containerd[1476]: time="2025-04-30T00:49:50.040208881Z" level=info msg="Forcibly stopping sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\"" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.092 [WARNING][5049] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6de56a42-e7e2-4279-87ca-32df2fc92dd6", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b", Pod:"csi-node-driver-wpvmp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b5ed042d7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.093 [INFO][5049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.093 [INFO][5049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" iface="eth0" netns="" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.093 [INFO][5049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.093 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.114 [INFO][5056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.115 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.115 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.126 [WARNING][5056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.127 [INFO][5056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" HandleID="k8s-pod-network.f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Workload="ci--4081--3--3--c--89ff891e34-k8s-csi--node--driver--wpvmp-eth0" Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.129 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:50.135788 containerd[1476]: 2025-04-30 00:49:50.132 [INFO][5049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1" Apr 30 00:49:50.137265 containerd[1476]: time="2025-04-30T00:49:50.135809296Z" level=info msg="TearDown network for sandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\" successfully" Apr 30 00:49:50.139751 containerd[1476]: time="2025-04-30T00:49:50.139698795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:50.139833 containerd[1476]: time="2025-04-30T00:49:50.139805475Z" level=info msg="RemovePodSandbox \"f30c84b6d8d59a52f74c7b53801e10499d0d2c8d401fbbb5bdb993a3cb1e88e1\" returns successfully" Apr 30 00:49:50.140514 containerd[1476]: time="2025-04-30T00:49:50.140467199Z" level=info msg="StopPodSandbox for \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\"" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.190 [WARNING][5075] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"779394cb-9d3f-45fc-87b2-ac2caf222c9a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6", Pod:"calico-apiserver-7766c54769-29prm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40603478634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.190 [INFO][5075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.190 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" iface="eth0" netns="" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.190 [INFO][5075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.190 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.216 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.216 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.216 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.227 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.227 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.230 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:50.233335 containerd[1476]: 2025-04-30 00:49:50.231 [INFO][5075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.233826 containerd[1476]: time="2025-04-30T00:49:50.233379281Z" level=info msg="TearDown network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\" successfully" Apr 30 00:49:50.233826 containerd[1476]: time="2025-04-30T00:49:50.233411442Z" level=info msg="StopPodSandbox for \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\" returns successfully" Apr 30 00:49:50.234697 containerd[1476]: time="2025-04-30T00:49:50.234346126Z" level=info msg="RemovePodSandbox for \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\"" Apr 30 00:49:50.234697 containerd[1476]: time="2025-04-30T00:49:50.234393566Z" level=info msg="Forcibly stopping sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\"" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.278 [WARNING][5102] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0", GenerateName:"calico-apiserver-7766c54769-", Namespace:"calico-apiserver", SelfLink:"", UID:"779394cb-9d3f-45fc-87b2-ac2caf222c9a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766c54769", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-c-89ff891e34", ContainerID:"849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6", Pod:"calico-apiserver-7766c54769-29prm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40603478634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.279 [INFO][5102] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.279 [INFO][5102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" iface="eth0" netns="" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.279 [INFO][5102] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.279 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.307 [INFO][5109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.307 [INFO][5109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.307 [INFO][5109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.320 [WARNING][5109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.320 [INFO][5109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" HandleID="k8s-pod-network.ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Workload="ci--4081--3--3--c--89ff891e34-k8s-calico--apiserver--7766c54769--29prm-eth0" Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.322 [INFO][5109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:50.326609 containerd[1476]: 2025-04-30 00:49:50.324 [INFO][5102] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766" Apr 30 00:49:50.327615 containerd[1476]: time="2025-04-30T00:49:50.327150088Z" level=info msg="TearDown network for sandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\" successfully" Apr 30 00:49:50.331716 containerd[1476]: time="2025-04-30T00:49:50.331545749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:50.331716 containerd[1476]: time="2025-04-30T00:49:50.331621150Z" level=info msg="RemovePodSandbox \"ef327f814f52c11654abc602072e796fa358a9ee7bec5630d31affc9e0e2d766\" returns successfully" Apr 30 00:50:09.881718 containerd[1476]: time="2025-04-30T00:50:09.880582963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:09.882726 containerd[1476]: time="2025-04-30T00:50:09.882668212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:50:09.884253 containerd[1476]: time="2025-04-30T00:50:09.884197579Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:09.887939 containerd[1476]: time="2025-04-30T00:50:09.887748356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:09.889099 containerd[1476]: time="2025-04-30T00:50:09.888606439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 56.114551414s" Apr 30 00:50:09.889099 containerd[1476]: time="2025-04-30T00:50:09.888655280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:50:09.891815 containerd[1476]: time="2025-04-30T00:50:09.891300332Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:50:09.892802 containerd[1476]: time="2025-04-30T00:50:09.892763578Z" level=info msg="CreateContainer within sandbox \"849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:50:09.919338 containerd[1476]: time="2025-04-30T00:50:09.919190819Z" level=info msg="CreateContainer within sandbox \"849a8c416d6428cf9b72459902e8c870833d8f25ef0709dab1f5a19d590452e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5fed7576029ba46826ca38d6684c21f503742aef189c089298ff8a61a7a32945\"" Apr 30 00:50:09.921171 containerd[1476]: time="2025-04-30T00:50:09.921110228Z" level=info msg="StartContainer for \"5fed7576029ba46826ca38d6684c21f503742aef189c089298ff8a61a7a32945\"" Apr 30 00:50:09.972399 systemd[1]: run-containerd-runc-k8s.io-5fed7576029ba46826ca38d6684c21f503742aef189c089298ff8a61a7a32945-runc.1dBpfS.mount: Deactivated successfully. Apr 30 00:50:09.983129 systemd[1]: Started cri-containerd-5fed7576029ba46826ca38d6684c21f503742aef189c089298ff8a61a7a32945.scope - libcontainer container 5fed7576029ba46826ca38d6684c21f503742aef189c089298ff8a61a7a32945. Apr 30 00:50:10.069260 containerd[1476]: time="2025-04-30T00:50:10.068941261Z" level=info msg="StartContainer for \"5fed7576029ba46826ca38d6684c21f503742aef189c089298ff8a61a7a32945\" returns successfully" Apr 30 00:50:11.759768 kubelet[2670]: I0430 00:50:11.759689 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7766c54769-29prm" podStartSLOduration=72.64339556 podStartE2EDuration="2m8.759672022s" podCreationTimestamp="2025-04-30 00:48:03 +0000 UTC" firstStartedPulling="2025-04-30 00:49:13.773653783 +0000 UTC m=+84.774370512" lastFinishedPulling="2025-04-30 00:50:09.889930165 +0000 UTC m=+140.890646974" observedRunningTime="2025-04-30 00:50:10.690194646 +0000 UTC m=+141.690911375" watchObservedRunningTime="2025-04-30 00:50:11.759672022 +0000 UTC m=+142.760388751" Apr 30 00:50:12.524459 containerd[1476]: time="2025-04-30T00:50:12.524406088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:12.526188 containerd[1476]: time="2025-04-30T00:50:12.526100456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:50:12.526939 containerd[1476]: time="2025-04-30T00:50:12.526856059Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:12.529860 containerd[1476]: time="2025-04-30T00:50:12.529778232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:12.531332 containerd[1476]: time="2025-04-30T00:50:12.531143678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.639796946s" Apr 30 00:50:12.531332 
containerd[1476]: time="2025-04-30T00:50:12.531230199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:50:12.533891 containerd[1476]: time="2025-04-30T00:50:12.533804851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:50:12.569490 containerd[1476]: time="2025-04-30T00:50:12.569435612Z" level=info msg="CreateContainer within sandbox \"980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:50:12.593876 containerd[1476]: time="2025-04-30T00:50:12.593816162Z" level=info msg="CreateContainer within sandbox \"980852e63ad9114ae2b99d97ab8a9579f0064f389eb953f49d6495674b2a6005\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"479cf390ffefeb22b666a9e349d2c3df709589b66d58c5c3ef9fb890424646d6\"" Apr 30 00:50:12.595732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626484397.mount: Deactivated successfully. Apr 30 00:50:12.596939 containerd[1476]: time="2025-04-30T00:50:12.596480894Z" level=info msg="StartContainer for \"479cf390ffefeb22b666a9e349d2c3df709589b66d58c5c3ef9fb890424646d6\"" Apr 30 00:50:12.647110 systemd[1]: Started cri-containerd-479cf390ffefeb22b666a9e349d2c3df709589b66d58c5c3ef9fb890424646d6.scope - libcontainer container 479cf390ffefeb22b666a9e349d2c3df709589b66d58c5c3ef9fb890424646d6. Apr 30 00:50:12.702152 containerd[1476]: time="2025-04-30T00:50:12.702012412Z" level=info msg="StartContainer for \"479cf390ffefeb22b666a9e349d2c3df709589b66d58c5c3ef9fb890424646d6\" returns successfully" Apr 30 00:50:13.768470 kubelet[2670]: I0430 00:50:13.767890 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-574958cb4d-svpdv" podStartSLOduration=70.21353681 podStartE2EDuration="2m8.767862593s" podCreationTimestamp="2025-04-30 00:48:05 +0000 UTC" firstStartedPulling="2025-04-30 00:49:13.978089861 +0000 UTC m=+84.978806550" lastFinishedPulling="2025-04-30 00:50:12.532415604 +0000 UTC m=+143.533132333" observedRunningTime="2025-04-30 00:50:13.715464116 +0000 UTC m=+144.716180845" watchObservedRunningTime="2025-04-30 00:50:13.767862593 +0000 UTC m=+144.768579362" Apr 30 00:50:19.682065 containerd[1476]: time="2025-04-30T00:50:19.682009442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:19.684220 containerd[1476]: time="2025-04-30T00:50:19.684007291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:50:19.684993 containerd[1476]: time="2025-04-30T00:50:19.684936575Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:19.690891 containerd[1476]: time="2025-04-30T00:50:19.690833841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 7.15698671s" Apr 30 00:50:19.690891 containerd[1476]: time="2025-04-30T00:50:19.690881401Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:50:19.691599 containerd[1476]: time="2025-04-30T00:50:19.691487204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:19.693737 containerd[1476]: time="2025-04-30T00:50:19.693662694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:50:19.713012 containerd[1476]: time="2025-04-30T00:50:19.712863060Z" level=info msg="CreateContainer within sandbox \"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:50:19.744723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478418637.mount: Deactivated successfully. Apr 30 00:50:19.746645 containerd[1476]: time="2025-04-30T00:50:19.746171608Z" level=info msg="CreateContainer within sandbox \"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"62ec7ec2b2119981b3928dfed1e4d42e01ec022d678f1c3220ee0995c2f5bc24\"" Apr 30 00:50:19.747411 containerd[1476]: time="2025-04-30T00:50:19.747385734Z" level=info msg="StartContainer for \"62ec7ec2b2119981b3928dfed1e4d42e01ec022d678f1c3220ee0995c2f5bc24\"" Apr 30 00:50:19.782331 systemd[1]: Started cri-containerd-62ec7ec2b2119981b3928dfed1e4d42e01ec022d678f1c3220ee0995c2f5bc24.scope - libcontainer container 62ec7ec2b2119981b3928dfed1e4d42e01ec022d678f1c3220ee0995c2f5bc24. Apr 30 00:50:19.816858 containerd[1476]: time="2025-04-30T00:50:19.816711444Z" level=info msg="StartContainer for \"62ec7ec2b2119981b3928dfed1e4d42e01ec022d678f1c3220ee0995c2f5bc24\" returns successfully" Apr 30 00:50:20.063743 containerd[1476]: time="2025-04-30T00:50:20.063610147Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:20.065941 containerd[1476]: time="2025-04-30T00:50:20.064888352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 00:50:20.067076 containerd[1476]: time="2025-04-30T00:50:20.067030082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 373.334388ms" Apr 30 00:50:20.067235 containerd[1476]: time="2025-04-30T00:50:20.067216123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:50:20.068947 containerd[1476]: time="2025-04-30T00:50:20.068866050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:50:20.070551 containerd[1476]: time="2025-04-30T00:50:20.070487417Z" level=info msg="CreateContainer within sandbox \"bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:50:20.093282 containerd[1476]: time="2025-04-30T00:50:20.093207559Z" 
level=info msg="CreateContainer within sandbox \"bfc49a0e19e03d8a8b973fd5a6433a9136592efb754e07752d7f223cf810e557\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e8beaba6b850076a580133380f889c825aecbc7f322735c02513392b4df9ab87\"" Apr 30 00:50:20.094721 containerd[1476]: time="2025-04-30T00:50:20.094632885Z" level=info msg="StartContainer for \"e8beaba6b850076a580133380f889c825aecbc7f322735c02513392b4df9ab87\"" Apr 30 00:50:20.131261 systemd[1]: Started cri-containerd-e8beaba6b850076a580133380f889c825aecbc7f322735c02513392b4df9ab87.scope - libcontainer container e8beaba6b850076a580133380f889c825aecbc7f322735c02513392b4df9ab87. Apr 30 00:50:20.185714 containerd[1476]: time="2025-04-30T00:50:20.185312850Z" level=info msg="StartContainer for \"e8beaba6b850076a580133380f889c825aecbc7f322735c02513392b4df9ab87\" returns successfully" Apr 30 00:50:20.739314 kubelet[2670]: I0430 00:50:20.739101 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7766c54769-sk6mv" podStartSLOduration=74.433530775 podStartE2EDuration="2m17.73908068s" podCreationTimestamp="2025-04-30 00:48:03 +0000 UTC" firstStartedPulling="2025-04-30 00:49:16.762619742 +0000 UTC m=+87.763336471" lastFinishedPulling="2025-04-30 00:50:20.068169647 +0000 UTC m=+151.068886376" observedRunningTime="2025-04-30 00:50:20.734742781 +0000 UTC m=+151.735459510" watchObservedRunningTime="2025-04-30 00:50:20.73908068 +0000 UTC m=+151.739797409" Apr 30 00:50:45.140837 containerd[1476]: time="2025-04-30T00:50:45.140776568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:45.145146 containerd[1476]: time="2025-04-30T00:50:45.142570576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:50:45.145146 containerd[1476]: time="2025-04-30T00:50:45.144844506Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:45.149154 containerd[1476]: time="2025-04-30T00:50:45.148074520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:45.149154 containerd[1476]: time="2025-04-30T00:50:45.148878363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 25.079928593s" Apr 30 00:50:45.149154 containerd[1476]: time="2025-04-30T00:50:45.148964523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:50:45.152540 containerd[1476]: time="2025-04-30T00:50:45.152484379Z" level=info msg="CreateContainer within sandbox \"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:50:45.171859 
containerd[1476]: time="2025-04-30T00:50:45.171801822Z" level=info msg="CreateContainer within sandbox \"cf006348e773b0fcd0b4e94f19fe45f19ec6a8ba7285fb87228485a608210f9b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"945e9d04c3bfcf88012c9726ce2c671200eaa8ad1ef38b9f1215f56fffdae34f\"" Apr 30 00:50:45.177322 containerd[1476]: time="2025-04-30T00:50:45.176039040Z" level=info msg="StartContainer for \"945e9d04c3bfcf88012c9726ce2c671200eaa8ad1ef38b9f1215f56fffdae34f\"" Apr 30 00:50:45.176278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082054661.mount: Deactivated successfully. Apr 30 00:50:45.216287 systemd[1]: Started cri-containerd-945e9d04c3bfcf88012c9726ce2c671200eaa8ad1ef38b9f1215f56fffdae34f.scope - libcontainer container 945e9d04c3bfcf88012c9726ce2c671200eaa8ad1ef38b9f1215f56fffdae34f. Apr 30 00:50:45.244295 containerd[1476]: time="2025-04-30T00:50:45.244225493Z" level=info msg="StartContainer for \"945e9d04c3bfcf88012c9726ce2c671200eaa8ad1ef38b9f1215f56fffdae34f\" returns successfully" Apr 30 00:50:45.319759 kubelet[2670]: I0430 00:50:45.319714 2670 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:50:45.340538 kubelet[2670]: I0430 00:50:45.340368 2670 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:52:01.381465 systemd[1]: run-containerd-runc-k8s.io-01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3-runc.lg4rnp.mount: Deactivated successfully. Apr 30 00:52:29.767432 systemd[1]: Started sshd@7-91.99.89.231:22-139.178.68.195:47714.service - OpenSSH per-connection server daemon (139.178.68.195:47714). Apr 30 00:52:30.766146 sshd[5664]: Accepted publickey for core from 139.178.68.195 port 47714 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:30.768761 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:30.778461 systemd-logind[1462]: New session 8 of user core. Apr 30 00:52:30.783127 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:52:31.386600 systemd[1]: run-containerd-runc-k8s.io-01ac3c72040eff5d4dbfbfafa01149e2fa1d260b76df265ae9ecfa2a045e91b3-runc.sMPOav.mount: Deactivated successfully. Apr 30 00:52:31.577714 sshd[5664]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:31.583473 systemd[1]: sshd@7-91.99.89.231:22-139.178.68.195:47714.service: Deactivated successfully. Apr 30 00:52:31.586410 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:52:31.588346 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:52:31.590559 systemd-logind[1462]: Removed session 8. Apr 30 00:52:36.754259 systemd[1]: Started sshd@8-91.99.89.231:22-139.178.68.195:42882.service - OpenSSH per-connection server daemon (139.178.68.195:42882). Apr 30 00:52:37.727136 sshd[5718]: Accepted publickey for core from 139.178.68.195 port 42882 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:37.729580 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:37.736146 systemd-logind[1462]: New session 9 of user core. Apr 30 00:52:37.744228 systemd[1]: Started session-9.scope - Session 9 of User core. 
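[Editor's note] The Calico teardown sequence repeated above follows one fixed pattern: acquire the "host-wide IPAM lock", try to release the allocation by its handle ID, and when that allocation "doesn't exist", fall back to releasing by the workload ID before dropping the lock. As a rough illustration only — the types and function names below are invented for this sketch, and Calico's real IPAM is backed by the datastore, not an in-memory map — the control flow looks like this:

```go
// Illustrative sketch of the release-under-lock pattern visible in the
// ipam/ipam_plugin.go messages above. All names here are hypothetical.
package main

import (
	"fmt"
	"sync"
)

type ipamStore struct {
	mu         sync.Mutex          // the "host-wide IPAM lock" from the log
	byHandle   map[string][]string // allocations keyed by handle ID
	byWorkload map[string][]string // allocations keyed by workload ID
}

// releaseAddresses mirrors the logged order: lock, try the handle ID, and
// when "asked to release address but it doesn't exist", fall back to the
// workload ID before the lock is released.
func (s *ipamStore) releaseAddresses(handleID, workloadID string) []string {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if addrs, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		return addrs
	}
	addrs := s.byWorkload[workloadID] // "Releasing address using workloadID"
	delete(s.byWorkload, workloadID)
	return addrs
}

func main() {
	s := &ipamStore{
		byHandle:   map[string][]string{},
		byWorkload: map[string][]string{"calico-apiserver-7766c54769-sk6mv": {"192.168.33.134/32"}},
	}
	// The handle is already gone (as in the WARNING lines), so the fallback path runs.
	fmt.Println(s.releaseAddresses("k8s-pod-network.stale-handle", "calico-apiserver-7766c54769-sk6mv"))
}
```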
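[Editor's note] The long pulls logged above (e.g. ghcr.io/flatcar/calico/apiserver:v3.29.3, 40247603 bytes read in 56.114551414s) are driven by kubelet over the CRI; the subsequent CreateContainer/StartContainer events happen inside the sandboxes named in the log. The same pull can be reproduced against this node's containerd directly with its Go client. A minimal sketch, assuming the default socket path and the "k8s.io" namespace that kubelet's CRI traffic uses:

```go
// Minimal sketch: pull one of the images from the log above through the
// containerd Go client, the rough equivalent of the PullImage/ImageCreate
// events shown. Must run on the node with access to the containerd socket.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```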
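[Editor's note] The kubelet pod_startup_latency_tracker lines above are internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pulling window (lastFinishedPulling − firstStartedPulling, using the monotonic m=+ offsets). For calico-apiserver-7766c54769-sk6mv: 137.73908068 − (151.068886376 − 87.763336471) = 74.433530775, exactly the logged SLO value; the 29prm line checks out the same way. A few lines of Go to verify:

```go
// Checks the pod_startup_latency_tracker arithmetic from the
// calico-apiserver-7766c54769-sk6mv line above:
// podStartSLOduration = podStartE2EDuration - time spent pulling images.
package main

import (
	"fmt"
	"time"
)

func main() {
	firstStartedPulling := 87.763336471  // m=+87.763336471
	lastFinishedPulling := 151.068886376 // m=+151.068886376

	e2e, err := time.ParseDuration("2m17.73908068s") // podStartE2EDuration
	if err != nil {
		panic(err)
	}

	slo := e2e.Seconds() - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration = %.9f\n", slo) // prints ≈ 74.433530775
}
```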
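[Editor's note] The final kubelet lines above show the CSI plugin handshake: once the node-driver-registrar container starts, kubelet validates and registers the csi.tigera.io driver via the socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. A tiny diagnostic sketch (stdlib only, run on the node itself) that checks the same socket is accepting connections:

```go
// Probe the CSI socket kubelet just registered. Diagnostic sketch only;
// it does not speak the CSI gRPC protocol, it just dials the unix socket.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("csi.tigera.io socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("csi.tigera.io socket is accepting connections")
}
```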
Apr 30 00:52:38.483347 sshd[5718]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:38.489713 systemd[1]: sshd@8-91.99.89.231:22-139.178.68.195:42882.service: Deactivated successfully. Apr 30 00:52:38.490076 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:52:38.492690 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:52:38.495097 systemd-logind[1462]: Removed session 9. Apr 30 00:52:43.661318 systemd[1]: Started sshd@9-91.99.89.231:22-139.178.68.195:42898.service - OpenSSH per-connection server daemon (139.178.68.195:42898). Apr 30 00:52:44.659503 sshd[5732]: Accepted publickey for core from 139.178.68.195 port 42898 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:44.662505 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:44.669460 systemd-logind[1462]: New session 10 of user core. Apr 30 00:52:44.674171 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:52:45.424480 sshd[5732]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:45.430104 systemd[1]: sshd@9-91.99.89.231:22-139.178.68.195:42898.service: Deactivated successfully. Apr 30 00:52:45.433964 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:52:45.435259 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:52:45.437119 systemd-logind[1462]: Removed session 10. Apr 30 00:52:45.601297 systemd[1]: Started sshd@10-91.99.89.231:22-139.178.68.195:33168.service - OpenSSH per-connection server daemon (139.178.68.195:33168). Apr 30 00:52:46.589740 sshd[5747]: Accepted publickey for core from 139.178.68.195 port 33168 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:46.592485 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:46.598517 systemd-logind[1462]: New session 11 of user core. Apr 30 00:52:46.603111 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:52:47.394056 sshd[5747]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:47.399101 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:52:47.399541 systemd[1]: sshd@10-91.99.89.231:22-139.178.68.195:33168.service: Deactivated successfully. Apr 30 00:52:47.403313 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:52:47.406595 systemd-logind[1462]: Removed session 11. Apr 30 00:52:47.576494 systemd[1]: Started sshd@11-91.99.89.231:22-139.178.68.195:33176.service - OpenSSH per-connection server daemon (139.178.68.195:33176). Apr 30 00:52:48.559860 sshd[5759]: Accepted publickey for core from 139.178.68.195 port 33176 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:48.563398 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:48.570876 systemd-logind[1462]: New session 12 of user core. Apr 30 00:52:48.574128 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:52:49.326617 sshd[5759]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:49.333099 systemd[1]: sshd@11-91.99.89.231:22-139.178.68.195:33176.service: Deactivated successfully. Apr 30 00:52:49.333466 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:52:49.337104 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:52:49.338647 systemd-logind[1462]: Removed session 12. 
Apr 30 00:52:54.504445 systemd[1]: Started sshd@12-91.99.89.231:22-139.178.68.195:33182.service - OpenSSH per-connection server daemon (139.178.68.195:33182). Apr 30 00:52:55.503114 sshd[5778]: Accepted publickey for core from 139.178.68.195 port 33182 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:55.505163 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:55.511166 systemd-logind[1462]: New session 13 of user core. Apr 30 00:52:55.517173 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:52:56.266291 sshd[5778]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:56.273729 systemd[1]: sshd@12-91.99.89.231:22-139.178.68.195:33182.service: Deactivated successfully. Apr 30 00:52:56.278993 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:52:56.280193 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:52:56.281499 systemd-logind[1462]: Removed session 13. Apr 30 00:52:56.441444 systemd[1]: Started sshd@13-91.99.89.231:22-139.178.68.195:36348.service - OpenSSH per-connection server daemon (139.178.68.195:36348). Apr 30 00:52:57.433348 sshd[5794]: Accepted publickey for core from 139.178.68.195 port 36348 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:57.435998 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:57.444502 systemd-logind[1462]: New session 14 of user core. Apr 30 00:52:57.455306 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:52:58.334590 sshd[5794]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:58.341194 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:52:58.342584 systemd[1]: sshd@13-91.99.89.231:22-139.178.68.195:36348.service: Deactivated successfully. Apr 30 00:52:58.345760 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:52:58.347276 systemd-logind[1462]: Removed session 14. Apr 30 00:52:58.509215 systemd[1]: Started sshd@14-91.99.89.231:22-139.178.68.195:36354.service - OpenSSH per-connection server daemon (139.178.68.195:36354). Apr 30 00:52:59.483431 sshd[5804]: Accepted publickey for core from 139.178.68.195 port 36354 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:59.487565 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:59.494638 systemd-logind[1462]: New session 15 of user core. Apr 30 00:52:59.498423 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:53:02.386326 sshd[5804]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:02.394881 systemd[1]: sshd@14-91.99.89.231:22-139.178.68.195:36354.service: Deactivated successfully. Apr 30 00:53:02.402659 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:53:02.404628 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:53:02.406121 systemd-logind[1462]: Removed session 15. Apr 30 00:53:02.550655 systemd[1]: Started sshd@15-91.99.89.231:22-139.178.68.195:36362.service - OpenSSH per-connection server daemon (139.178.68.195:36362). 
Apr 30 00:53:03.555339 sshd[5881]: Accepted publickey for core from 139.178.68.195 port 36362 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:03.554241 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:03.567053 systemd-logind[1462]: New session 16 of user core. Apr 30 00:53:03.573141 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:53:04.473577 sshd[5881]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:04.479522 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:53:04.480363 systemd[1]: sshd@15-91.99.89.231:22-139.178.68.195:36362.service: Deactivated successfully. Apr 30 00:53:04.485385 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:53:04.486812 systemd-logind[1462]: Removed session 16. Apr 30 00:53:04.647175 systemd[1]: Started sshd@16-91.99.89.231:22-139.178.68.195:36366.service - OpenSSH per-connection server daemon (139.178.68.195:36366). Apr 30 00:53:05.643844 sshd[5892]: Accepted publickey for core from 139.178.68.195 port 36366 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:05.646382 sshd[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:05.651516 systemd-logind[1462]: New session 17 of user core. Apr 30 00:53:05.658219 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:53:06.401650 sshd[5892]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:06.407130 systemd[1]: sshd@16-91.99.89.231:22-139.178.68.195:36366.service: Deactivated successfully. Apr 30 00:53:06.410702 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:53:06.412489 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:53:06.414058 systemd-logind[1462]: Removed session 17. Apr 30 00:53:11.581351 systemd[1]: Started sshd@17-91.99.89.231:22-139.178.68.195:44730.service - OpenSSH per-connection server daemon (139.178.68.195:44730). Apr 30 00:53:12.578887 sshd[5908]: Accepted publickey for core from 139.178.68.195 port 44730 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:12.580372 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:12.590647 systemd-logind[1462]: New session 18 of user core. Apr 30 00:53:12.592115 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:53:13.346530 sshd[5908]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:13.353730 systemd[1]: sshd@17-91.99.89.231:22-139.178.68.195:44730.service: Deactivated successfully. Apr 30 00:53:13.354000 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:53:13.359093 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:53:13.360122 systemd-logind[1462]: Removed session 18. Apr 30 00:53:18.515139 systemd[1]: Started sshd@18-91.99.89.231:22-139.178.68.195:41268.service - OpenSSH per-connection server daemon (139.178.68.195:41268). Apr 30 00:53:19.510405 sshd[5922]: Accepted publickey for core from 139.178.68.195 port 41268 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:19.513179 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:19.518827 systemd-logind[1462]: New session 19 of user core. Apr 30 00:53:19.524123 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 30 00:53:20.273106 sshd[5922]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:20.277247 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:53:20.279487 systemd[1]: sshd@18-91.99.89.231:22-139.178.68.195:41268.service: Deactivated successfully. Apr 30 00:53:20.283326 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:53:20.285743 systemd-logind[1462]: Removed session 19.
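[Editor's note] The remainder of the section is the SSH audit trail: each inbound connection gets its own per-connection sshd service, logind allocates a session-N.scope, and every pam_unix "session opened" line is eventually paired with a "session closed" line from the same sshd PID. A throwaway sketch for pairing those lines and reporting session lengths — it assumes one journal entry per line, and the timestamp layout, year (2025, from the boot banner), and regex are assumptions about this journal's text format:

```go
// Pair sshd "session opened"/"session closed" journal lines by PID and
// print how long each session lasted. Reads log text from stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var entry = regexp.MustCompile(
	`^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long

	for sc.Scan() {
		m := entry.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// Timestamps like "Apr 30 00:52:30.768761" carry no year; assume 2025.
		t, err := time.Parse("Jan 2 15:04:05.000000 2006", m[1]+" 2025")
		if err != nil {
			continue
		}
		pid := m[2]
		switch m[3] {
		case "opened":
			opened[pid] = t
		case "closed":
			if start, ok := opened[pid]; ok {
				fmt.Printf("sshd[%s]: session lasted %s\n", pid, t.Sub(start))
				delete(opened, pid)
			}
		}
	}
}
```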