Apr 30 00:50:25.936041 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:50:25.936087 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:50:25.936099 kernel: KASLR enabled
Apr 30 00:50:25.936105 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 30 00:50:25.936111 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 30 00:50:25.936117 kernel: random: crng init done
Apr 30 00:50:25.936124 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:50:25.936130 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 30 00:50:25.936136 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:50:25.936144 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936150 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936156 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936162 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936169 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936187 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936197 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936204 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936210 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:50:25.936216 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 00:50:25.936223 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 30 00:50:25.936229 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:50:25.936235 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:50:25.936242 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 30 00:50:25.936248 kernel: Zone ranges:
Apr 30 00:50:25.936255 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:50:25.936263 kernel: DMA32 empty
Apr 30 00:50:25.936269 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 30 00:50:25.936275 kernel: Movable zone start for each node
Apr 30 00:50:25.936281 kernel: Early memory node ranges
Apr 30 00:50:25.936288 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 30 00:50:25.936294 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 30 00:50:25.936301 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 30 00:50:25.936307 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 30 00:50:25.936314 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 30 00:50:25.936320 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 30 00:50:25.936326 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 30 00:50:25.936333 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:50:25.936341 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 30 00:50:25.936348 kernel: psci: probing for conduit method from ACPI.
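These console records all follow the syslog-style `Mon DD HH:MM:SS.ffffff tag: message` layout. A minimal Python sketch for splitting them into timestamp, tag, and message; the year is not part of the prefix, so 2025 is assumed here from the kernel version banner above:

```python
import re
from datetime import datetime

# Illustrative parser for "Apr 30 00:50:25.936099 kernel: KASLR enabled"
# style records. Assumption: year 2025 (taken from the version banner
# above, since the line itself carries no year).
LINE_RE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} [ \d]?\d \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<tag>\S+): (?P<msg>.*)$"
)

def parse_line(line: str):
    m = LINE_RE.match(line)
    if m is None:
        return None  # not a log record (e.g. wrapped continuation text)
    ts = datetime.strptime(f"2025 {m['ts']}", "%Y %b %d %H:%M:%S.%f")
    return ts, m["tag"], m["msg"]

print(parse_line("Apr 30 00:50:25.936099 kernel: KASLR enabled"))
```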
Apr 30 00:50:25.936354 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:50:25.936364 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:50:25.936371 kernel: psci: Trusted OS migration not required
Apr 30 00:50:25.936378 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:50:25.936386 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:50:25.936393 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:50:25.936400 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:50:25.936407 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:50:25.936413 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:50:25.936432 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:50:25.936439 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:50:25.936445 kernel: CPU features: detected: Spectre-v4
Apr 30 00:50:25.936452 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:50:25.936459 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:50:25.936467 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:50:25.936474 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:50:25.936481 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:50:25.936488 kernel: alternatives: applying boot alternatives
Apr 30 00:50:25.936496 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:50:25.936503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:50:25.936509 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:50:25.936516 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:50:25.936523 kernel: Fallback order for Node 0: 0
Apr 30 00:50:25.936530 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 30 00:50:25.936537 kernel: Policy zone: Normal
Apr 30 00:50:25.936545 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:50:25.936552 kernel: software IO TLB: area num 2.
Apr 30 00:50:25.936559 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 30 00:50:25.936566 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
Apr 30 00:50:25.936574 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:50:25.936581 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:50:25.936588 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:50:25.936596 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:50:25.936603 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:50:25.936610 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:50:25.936617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
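The "Kernel command line:" record above is where Flatcar wires up its read-only, dm-verity-protected /usr (mount.usr, verity.usr, verity.usrhash) and its OEM/first-boot flags. A minimal sketch of the conventional whitespace and first-'=' split for such parameters, using the values from the log; the parser itself is illustrative, not Flatcar code:

```python
# Split kernel command line tokens into a dict; values may themselves
# contain '=' (e.g. root=LABEL=ROOT), so only the first '=' separates.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 "
    "flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner "
    "verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a"
)

params = {}
for token in cmdline.split():
    key, sep, value = token.partition("=")
    params[key] = value if sep else True  # bare tokens become boolean flags

assert params["root"] == "LABEL=ROOT"
assert params["verity.usr"].startswith("PARTUUID=")
print(params["verity.usrhash"])
```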
Apr 30 00:50:25.936625 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:50:25.936632 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:50:25.936639 kernel: GICv3: 256 SPIs implemented
Apr 30 00:50:25.936647 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:50:25.936654 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:50:25.936661 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:50:25.936668 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:50:25.936674 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:50:25.936681 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:50:25.936688 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:50:25.936695 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 30 00:50:25.936702 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 30 00:50:25.936710 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:50:25.936717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:50:25.936724 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:50:25.936731 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:50:25.936737 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:50:25.936744 kernel: Console: colour dummy device 80x25
Apr 30 00:50:25.936751 kernel: ACPI: Core revision 20230628
Apr 30 00:50:25.936759 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:50:25.936766 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:50:25.936773 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:50:25.936782 kernel: landlock: Up and running.
Apr 30 00:50:25.936789 kernel: SELinux: Initializing.
Apr 30 00:50:25.936796 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:50:25.936803 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:50:25.936810 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:50:25.936817 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:50:25.936824 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:50:25.936831 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:50:25.936838 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:50:25.936847 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:50:25.936854 kernel: Remapping and enabling EFI services.
Apr 30 00:50:25.936861 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:50:25.936868 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:50:25.936875 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:50:25.936882 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 30 00:50:25.936889 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:50:25.936896 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:50:25.936903 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:50:25.936910 kernel: SMP: Total of 2 processors activated.
Apr 30 00:50:25.936919 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:50:25.936926 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:50:25.936939 kernel: CPU features: detected: Common not Private translations
Apr 30 00:50:25.936948 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:50:25.936955 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:50:25.936962 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:50:25.936969 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:50:25.936977 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:50:25.936984 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:50:25.936993 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:50:25.937001 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:50:25.937008 kernel: alternatives: applying system-wide alternatives
Apr 30 00:50:25.937015 kernel: devtmpfs: initialized
Apr 30 00:50:25.937023 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:50:25.937030 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:50:25.937038 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:50:25.937046 kernel: SMBIOS 3.0.0 present.
Apr 30 00:50:25.937054 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 30 00:50:25.937061 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:50:25.937068 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:50:25.937076 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:50:25.937083 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:50:25.937091 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:50:25.937098 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Apr 30 00:50:25.937105 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:50:25.937114 kernel: cpuidle: using governor menu
Apr 30 00:50:25.937122 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:50:25.937129 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:50:25.937136 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:50:25.937143 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:50:25.937151 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:50:25.937159 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:50:25.937166 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:50:25.937173 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:50:25.937190 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:50:25.938204 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:50:25.938242 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:50:25.938250 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:50:25.938258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:50:25.938266 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:50:25.938274 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:50:25.938281 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:50:25.938289 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:50:25.938304 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:50:25.938311 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:50:25.938319 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:50:25.938326 kernel: ACPI: Interpreter enabled
Apr 30 00:50:25.938333 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:50:25.938341 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:50:25.938348 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:50:25.938355 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:50:25.938363 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:50:25.938607 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:50:25.938691 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:50:25.938760 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:50:25.938831 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:50:25.938898 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:50:25.938907 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:50:25.938915 kernel: PCI host bridge to bus 0000:00
Apr 30 00:50:25.938999 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:50:25.939062 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:50:25.939145 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:50:25.941371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:50:25.941561 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:50:25.941652 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 30 00:50:25.941776 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 30 00:50:25.941860 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:50:25.941940 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.942012 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 30 00:50:25.942096 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.942167 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 30 00:50:25.943350 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.943471 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 30 00:50:25.943560 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.943632 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 30 00:50:25.943711 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.943781 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 30 00:50:25.943858 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.943931 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 30 00:50:25.944006 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.944083 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 30 00:50:25.944162 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.945412 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 30 00:50:25.945537 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:50:25.945617 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 30 00:50:25.945705 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 30 00:50:25.945774 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 30 00:50:25.945855 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:50:25.945928 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 30 00:50:25.946000 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:50:25.946077 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:50:25.946158 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 00:50:25.947344 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 30 00:50:25.947512 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 00:50:25.947594 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 30 00:50:25.947704 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 30 00:50:25.947797 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 00:50:25.947882 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 30 00:50:25.948005 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 00:50:25.948085 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 30 00:50:25.948157 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 30 00:50:25.948296 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 00:50:25.948377 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 30 00:50:25.948473 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:50:25.948562 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:50:25.948637 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 30 00:50:25.948711 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 30 00:50:25.948783 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:50:25.948862 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 30 00:50:25.948960 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:50:25.949133 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:50:25.951261 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 30 00:50:25.951376 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 30 00:50:25.951470 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 30 00:50:25.951550 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 30 00:50:25.951620 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:50:25.951689 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:50:25.951772 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 30 00:50:25.951840 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 30 00:50:25.951908 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 30 00:50:25.951981 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 30 00:50:25.952051 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:50:25.952119 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:50:25.953281 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 30 00:50:25.953393 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:50:25.953501 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:50:25.953582 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 00:50:25.953652 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:50:25.953724 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:50:25.953798 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 00:50:25.953868 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:50:25.953941 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:50:25.954031 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 00:50:25.954119 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:50:25.955291 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:50:25.955412 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 30 00:50:25.955580 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:50:25.955671 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 30 00:50:25.955752 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:50:25.955854 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 30 00:50:25.955978 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:50:25.956070 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 30 00:50:25.956157 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:50:25.956353 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 30 00:50:25.956466 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:50:25.956553 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 30 00:50:25.956644 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:50:25.956732 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 30 00:50:25.956802 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:50:25.956878 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 30 00:50:25.956948 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:50:25.957031 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 30 00:50:25.957105 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:50:25.957191 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 30 00:50:25.957263 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 30 00:50:25.957335 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 30 00:50:25.957404 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 00:50:25.957489 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 30 00:50:25.957559 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 00:50:25.957629 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 30 00:50:25.957703 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 00:50:25.957774 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 30 00:50:25.957844 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 30 00:50:25.957914 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 30 00:50:25.957985 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 30 00:50:25.958054 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 30 00:50:25.958126 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 30 00:50:25.960412 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 30 00:50:25.960562 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 30 00:50:25.960647 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 30 00:50:25.960716 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 30 00:50:25.960788 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 30 00:50:25.960866 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 30 00:50:25.960959 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 30 00:50:25.961054 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 30 00:50:25.961142 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:50:25.961349 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 30 00:50:25.961483 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 00:50:25.961581 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 30 00:50:25.961666 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 30 00:50:25.961749 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:50:25.961830 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 30 00:50:25.961922 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 00:50:25.961998 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 30 00:50:25.962080 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 30 00:50:25.962159 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:50:25.963415 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:50:25.963582 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 30 00:50:25.963666 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 00:50:25.963736 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 30 00:50:25.963803 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 30 00:50:25.963876 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:50:25.963958 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:50:25.964033 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 00:50:25.964104 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 30 00:50:25.964173 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 30 00:50:25.964270 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:50:25.964353 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 30 00:50:25.964441 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 30 00:50:25.964519 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 00:50:25.964588 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 30 00:50:25.964658 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 30 00:50:25.964727 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:50:25.964805 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 30 00:50:25.964880 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 30 00:50:25.964954 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 00:50:25.965025 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 30 00:50:25.965092 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 30 00:50:25.965160 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:50:25.965866 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 30 00:50:25.965972 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 30 00:50:25.966047 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 30 00:50:25.966127 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 00:50:25.966329 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 30 00:50:25.966406 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 30 00:50:25.966491 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:50:25.966564 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 00:50:25.966632 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 30 00:50:25.966699 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 30 00:50:25.966768 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:50:25.966847 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 00:50:25.966917 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 30 00:50:25.966986 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 30 00:50:25.967053 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:50:25.967121 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:50:25.967199 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:50:25.967260 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:50:25.967342 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 30 00:50:25.967445 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 30 00:50:25.967512 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:50:25.967583 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 30 00:50:25.967646 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 30 00:50:25.967739 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:50:25.967816 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 30 00:50:25.967884 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 30 00:50:25.967948 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:50:25.968030 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 30 00:50:25.968098 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 30 00:50:25.968163 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:50:25.968272 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 30 00:50:25.968340 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 30 00:50:25.968407 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:50:25.968540 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 30 00:50:25.968611 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 30 00:50:25.968681 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:50:25.968758 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 30 00:50:25.968822 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 30 00:50:25.968945 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:50:25.969030 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 30 00:50:25.969097 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 30 00:50:25.969161 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:50:25.969278 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 30 00:50:25.969355 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 30 00:50:25.969436 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:50:25.969448 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:50:25.969457 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:50:25.969465 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:50:25.969473 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:50:25.969480 kernel: iommu: Default domain type: Translated
Apr 30 00:50:25.969488 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:50:25.969499 kernel: efivars: Registered efivars operations
Apr 30 00:50:25.969507 kernel: vgaarb: loaded
Apr 30 00:50:25.969515 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:50:25.969523 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:50:25.969531 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:50:25.969539 kernel: pnp: PnP ACPI init
Apr 30 00:50:25.969635 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:50:25.969647 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:50:25.969658 kernel: NET: Registered PF_INET protocol family
Apr 30 00:50:25.969666 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:50:25.969674 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:50:25.969682 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:50:25.969690 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:50:25.969698 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:50:25.969706 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:50:25.969713 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:50:25.969721 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:50:25.969731 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:50:25.969813 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Apr 30 00:50:25.969824 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:50:25.969833 kernel: kvm [1]: HYP mode not available
Apr 30 00:50:25.969840 kernel: Initialise system trusted keyrings
Apr 30 00:50:25.969848 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:50:25.969856 kernel: Key type asymmetric registered
Apr 30 00:50:25.969864 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:50:25.969872 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:50:25.969882 kernel: io scheduler mq-deadline registered
Apr 30 00:50:25.969889 kernel: io scheduler kyber registered
Apr 30 00:50:25.969897 kernel: io scheduler bfq registered
Apr 30 00:50:25.969906 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:50:25.969981 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Apr 30 00:50:25.970053 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Apr 30 00:50:25.970124 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.970224 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Apr 30 00:50:25.970301 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Apr 30 00:50:25.970372 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.970465 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Apr 30 00:50:25.970538 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Apr 30 00:50:25.970608 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.970685 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Apr 30 00:50:25.970757 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Apr 30 00:50:25.970828 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.970903 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Apr 30 00:50:25.970972 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Apr 30 00:50:25.971041 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.971118 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Apr 30 00:50:25.971200 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Apr 30 00:50:25.971272 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.971347 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Apr 30 00:50:25.971416 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Apr 30 00:50:25.971550 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.971629 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Apr 30 00:50:25.971700 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Apr 30 00:50:25.971770 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.971780 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Apr 30 00:50:25.971852 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 30 00:50:25.971929 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 30 00:50:25.972011 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:50:25.972024 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:50:25.972032 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:50:25.972043 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:50:25.972121 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 30 00:50:25.972264 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 30 00:50:25.972278 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:50:25.972286 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 00:50:25.972361 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 30 00:50:25.972376 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 30 00:50:25.972384 kernel: thunder_xcv, ver 1.0
Apr 30 00:50:25.972392 kernel: thunder_bgx, ver 1.0
Apr 30 00:50:25.972400 kernel: nicpf, ver 1.0
Apr 30 00:50:25.972407 kernel: nicvf, ver 1.0
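The rtc-efi records just below report the same instant twice, as an ISO timestamp and as epoch seconds. A one-line cross-check, purely illustrative:

```python
from datetime import datetime, timezone

# Epoch 1745974225 from the rtc-efi line should be 2025-04-30T00:50:25 UTC.
t = datetime.fromtimestamp(1745974225, tz=timezone.utc)
assert t.isoformat() == "2025-04-30T00:50:25+00:00"
```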
Apr 30 00:50:25.972510 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:50:25.972580 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:50:25 UTC (1745974225)
Apr 30 00:50:25.972591 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:50:25.972601 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 00:50:25.972609 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:50:25.972617 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:50:25.972627 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:50:25.972635 kernel: Segment Routing with IPv6
Apr 30 00:50:25.972643 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:50:25.972650 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:50:25.972658 kernel: Key type dns_resolver registered
Apr 30 00:50:25.972666 kernel: registered taskstats version 1
Apr 30 00:50:25.972675 kernel: Loading compiled-in X.509 certificates
Apr 30 00:50:25.972683 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:50:25.972691 kernel: Key type .fscrypt registered
Apr 30 00:50:25.972698 kernel: Key type fscrypt-provisioning registered
Apr 30 00:50:25.972706 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:50:25.972714 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:50:25.972721 kernel: ima: No architecture policies found
Apr 30 00:50:25.972729 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:50:25.972737 kernel: clk: Disabling unused clocks
Apr 30 00:50:25.972746 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:50:25.972753 kernel: Run /init as init process
Apr 30 00:50:25.972761 kernel: with arguments:
Apr 30 00:50:25.972769 kernel: /init
Apr 30 00:50:25.972776 kernel: with environment:
Apr 30 00:50:25.972783 kernel: HOME=/
Apr 30 00:50:25.972791 kernel: TERM=linux
Apr 30 00:50:25.972798 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:50:25.972812 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:50:25.972826 systemd[1]: Detected virtualization kvm.
Apr 30 00:50:25.972834 systemd[1]: Detected architecture arm64.
Apr 30 00:50:25.972842 systemd[1]: Running in initrd.
Apr 30 00:50:25.972850 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:50:25.972858 systemd[1]: Hostname set to .
Apr 30 00:50:25.972866 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:50:25.972874 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:50:25.972885 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:50:25.972893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:50:25.972902 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:50:25.972911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:50:25.972919 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:50:25.972930 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:50:25.972941 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:50:25.972951 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:50:25.972959 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:50:25.972967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:50:25.972976 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:50:25.972984 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:50:25.972992 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:50:25.973000 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:50:25.973009 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:50:25.973018 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:50:25.973027 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:50:25.973035 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:50:25.973046 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:50:25.973055 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:50:25.973063 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:50:25.973074 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:50:25.973084 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:50:25.973094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:50:25.973106 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:50:25.973114 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:50:25.973123 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:50:25.973131 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:50:25.973139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:50:25.973150 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:50:25.973160 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:50:25.973170 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:50:25.973222 systemd-journald[236]: Collecting audit messages is disabled.
Apr 30 00:50:25.973247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:50:25.973259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:50:25.973269 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:50:25.973278 kernel: Bridge firewalling registered
Apr 30 00:50:25.973287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:50:25.973295 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:50:25.973304 systemd-journald[236]: Journal started
Apr 30 00:50:25.973329 systemd-journald[236]: Runtime Journal (/run/log/journal/be257bf57664480f8ff7b2d2ce7ef527) is 8.0M, max 76.6M, 68.6M free.
Apr 30 00:50:25.935839 systemd-modules-load[237]: Inserted module 'overlay'
Apr 30 00:50:25.958916 systemd-modules-load[237]: Inserted module 'br_netfilter'
Apr 30 00:50:25.981047 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:50:25.986544 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:50:26.000728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:50:26.007364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:50:26.021432 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:50:26.028118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:50:26.029378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:50:26.031036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:50:26.039488 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:50:26.040322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:50:26.050475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:50:26.052653 dracut-cmdline[272]: dracut-dracut-053
Apr 30 00:50:26.054766 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:50:26.083329 systemd-resolved[277]: Positive Trust Anchors:
Apr 30 00:50:26.083345 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:50:26.083378 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:50:26.093528 systemd-resolved[277]: Defaulting to hostname 'linux'.
Apr 30 00:50:26.095096 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:50:26.096329 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:50:26.140217 kernel: SCSI subsystem initialized
Apr 30 00:50:26.144252 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:50:26.152233 kernel: iscsi: registered transport (tcp)
Apr 30 00:50:26.167247 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:50:26.167368 kernel: QLogic iSCSI HBA Driver
Apr 30 00:50:26.225944 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:50:26.232443 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:50:26.264342 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:50:26.264429 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:50:26.266007 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:50:26.321231 kernel: raid6: neonx8 gen() 15698 MB/s
Apr 30 00:50:26.338227 kernel: raid6: neonx4 gen() 15495 MB/s
Apr 30 00:50:26.355248 kernel: raid6: neonx2 gen() 13123 MB/s
Apr 30 00:50:26.372241 kernel: raid6: neonx1 gen() 10416 MB/s
Apr 30 00:50:26.389235 kernel: raid6: int64x8 gen() 6909 MB/s
Apr 30 00:50:26.406232 kernel: raid6: int64x4 gen() 7293 MB/s
Apr 30 00:50:26.423240 kernel: raid6: int64x2 gen() 6079 MB/s
Apr 30 00:50:26.440228 kernel: raid6: int64x1 gen() 5036 MB/s
Apr 30 00:50:26.440326 kernel: raid6: using algorithm neonx8 gen() 15698 MB/s
Apr 30 00:50:26.457265 kernel: raid6: .... xor() 11845 MB/s, rmw enabled
Apr 30 00:50:26.457361 kernel: raid6: using neon recovery algorithm
Apr 30 00:50:26.462231 kernel: xor: measuring software checksum speed
Apr 30 00:50:26.462309 kernel: 8regs : 19716 MB/sec
Apr 30 00:50:26.462332 kernel: 32regs : 17488 MB/sec
Apr 30 00:50:26.463208 kernel: arm64_neon : 27052 MB/sec
Apr 30 00:50:26.463235 kernel: xor: using function: arm64_neon (27052 MB/sec)
Apr 30 00:50:26.515245 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:50:26.530932 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:50:26.537445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:50:26.552402 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Apr 30 00:50:26.555981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:50:26.563409 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:50:26.597832 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Apr 30 00:50:26.635632 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:50:26.640564 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:50:26.704756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:50:26.712493 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:50:26.745509 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:50:26.746381 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:50:26.748532 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:50:26.750352 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:50:26.757534 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:50:26.777424 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
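The raid6/xor records above show the kernel benchmarking each candidate routine at boot and keeping the fastest (neonx8 for raid6 gen, arm64_neon for xor). A loose Python analogue of that measure-then-pick idea; it times plain big-int XOR and is in no way comparable to the kernel's tuned 8regs/32regs/NEON routines:

```python
import time

def xor_mbps(n: int = 1 << 24, rounds: int = 8) -> float:
    # XOR two n-byte buffers (held as big ints) and report MB/s.
    a = int.from_bytes(b"\xa5" * n, "little")
    b = int.from_bytes(b"\x5a" * n, "little")
    t0 = time.perf_counter()
    for _ in range(rounds):
        _ = a ^ b
    dt = time.perf_counter() - t0
    return n * rounds / dt / 1e6

print(f"python-int xor: {xor_mbps():.0f} MB/s")
```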
Apr 30 00:50:26.820438 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:50:26.827257 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 00:50:26.828232 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 00:50:26.865633 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 30 00:50:26.868333 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 30 00:50:26.868485 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:50:26.868497 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 30 00:50:26.877314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:50:26.879170 kernel: ACPI: bus type USB registered
Apr 30 00:50:26.877451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:50:26.879473 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:50:26.880776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:50:26.881893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:50:26.885012 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:50:26.889273 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:50:26.889297 kernel: usbcore: registered new interface driver hub
Apr 30 00:50:26.889308 kernel: usbcore: registered new device driver usb
Apr 30 00:50:26.890920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:50:26.906202 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 30 00:50:26.915586 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 00:50:26.915723 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 30 00:50:26.915824 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 30 00:50:26.915914 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 00:50:26.916007 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:50:26.916017 kernel: GPT:17805311 != 80003071
Apr 30 00:50:26.916027 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:50:26.916037 kernel: GPT:17805311 != 80003071
Apr 30 00:50:26.916046 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:50:26.916055 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:50:26.916068 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 30 00:50:26.923088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:50:26.927745 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:50:26.945036 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 00:50:26.945161 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 00:50:26.945468 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:50:26.945566 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 00:50:26.945660 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 00:50:26.945749 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:50:26.945871 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 00:50:26.945955 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
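The "GPT:17805311 != 80003071" complaints above mean the primary GPT header still points at a backup header placed for the original, smaller disk image rather than at this 80003072-sector disk's last LBA; disk-uuid.service rewrites the headers shortly after. A minimal sketch of that check, assuming 512-byte sectors and root access to the device; use parted or sgdisk for actual repairs:

```python
import struct

DEV, SECTOR = "/dev/sda", 512

with open(DEV, "rb") as f:
    f.seek(1 * SECTOR)            # primary GPT header lives at LBA 1
    hdr = f.read(92)
    f.seek(0, 2)                  # seek to end to learn the disk size
    last_lba = f.tell() // SECTOR - 1

# signature, revision, header size, header CRC, then current/backup LBAs
sig, rev, size, crc, cur, backup = struct.unpack_from("<8s3I4xQQ", hdr)
assert sig == b"EFI PART"
if backup != last_lba:
    print(f"backup header at LBA {backup}, expected {last_lba}")
```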
Apr 30 00:50:26.946056 kernel: hub 2-0:1.0: USB hub found Apr 30 00:50:26.946151 kernel: hub 2-0:1.0: 4 ports detected Apr 30 00:50:26.930608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:50:26.961339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:50:26.988852 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (500) Apr 30 00:50:26.990257 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (507) Apr 30 00:50:27.001825 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 30 00:50:27.011745 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 30 00:50:27.018922 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 00:50:27.024147 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 30 00:50:27.025545 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 30 00:50:27.034488 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:50:27.044774 disk-uuid[574]: Primary Header is updated. Apr 30 00:50:27.044774 disk-uuid[574]: Secondary Entries is updated. Apr 30 00:50:27.044774 disk-uuid[574]: Secondary Header is updated. Apr 30 00:50:27.052232 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:50:27.056292 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:50:27.060215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:50:27.187229 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 00:50:27.430220 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 30 00:50:27.566245 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 30 00:50:27.566304 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 00:50:27.568210 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 30 00:50:27.623590 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 30 00:50:27.623906 kernel: usbcore: registered new interface driver usbhid Apr 30 00:50:27.625250 kernel: usbhid: USB HID core driver Apr 30 00:50:28.067029 disk-uuid[575]: The operation has completed successfully. Apr 30 00:50:28.067807 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:50:28.121452 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:50:28.121570 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:50:28.138564 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:50:28.143533 sh[592]: Success Apr 30 00:50:28.162371 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:50:28.226414 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:50:28.237373 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:50:28.240330 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 00:50:28.274538 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 00:50:28.274611 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:50:28.274624 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:50:28.274635 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:50:28.275277 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:50:28.282220 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 00:50:28.286231 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:50:28.287023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:50:28.292517 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:50:28.294030 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:50:28.312208 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:50:28.312266 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:50:28.312281 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:50:28.316259 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 00:50:28.316334 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:50:28.333611 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:50:28.334226 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:50:28.347482 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:50:28.354472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:50:28.464415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:50:28.467357 ignition[682]: Ignition 2.19.0 Apr 30 00:50:28.467372 ignition[682]: Stage: fetch-offline Apr 30 00:50:28.471558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:50:28.467430 ignition[682]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:50:28.474729 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:50:28.467443 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:50:28.467616 ignition[682]: parsed url from cmdline: "" Apr 30 00:50:28.467620 ignition[682]: no config URL provided Apr 30 00:50:28.467624 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:50:28.467633 ignition[682]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:50:28.467638 ignition[682]: failed to fetch config: resource requires networking Apr 30 00:50:28.467949 ignition[682]: Ignition finished successfully Apr 30 00:50:28.498124 systemd-networkd[778]: lo: Link UP Apr 30 00:50:28.498136 systemd-networkd[778]: lo: Gained carrier Apr 30 00:50:28.502949 systemd-networkd[778]: Enumeration completed Apr 30 00:50:28.503302 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:50:28.504083 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 00:50:28.504089 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:50:28.504353 systemd[1]: Reached target network.target - Network. Apr 30 00:50:28.505339 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:28.505345 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:50:28.506635 systemd-networkd[778]: eth0: Link UP Apr 30 00:50:28.506639 systemd-networkd[778]: eth0: Gained carrier Apr 30 00:50:28.506648 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:28.509664 systemd-networkd[778]: eth1: Link UP Apr 30 00:50:28.509668 systemd-networkd[778]: eth1: Gained carrier Apr 30 00:50:28.509678 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:28.516482 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 00:50:28.530262 ignition[781]: Ignition 2.19.0 Apr 30 00:50:28.530994 ignition[781]: Stage: fetch Apr 30 00:50:28.531590 ignition[781]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:50:28.531603 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:50:28.531718 ignition[781]: parsed url from cmdline: "" Apr 30 00:50:28.531721 ignition[781]: no config URL provided Apr 30 00:50:28.531726 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:50:28.531735 ignition[781]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:50:28.531757 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 30 00:50:28.535620 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:50:28.532443 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 00:50:28.559301 systemd-networkd[778]: eth0: DHCPv4 address 49.12.45.4/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 00:50:28.733617 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 30 00:50:28.741454 ignition[781]: GET result: OK Apr 30 00:50:28.741639 ignition[781]: parsing config with SHA512: b6686b6cbe59cd96b657ab8b5953b8323e782eb6db299b53ef88d457c5f12ca676dbee2bbd3f46ade740ce334864cdd509454548f97bcd805d1e75aea30714c9 Apr 30 00:50:28.747656 unknown[781]: fetched base config from "system" Apr 30 00:50:28.747676 unknown[781]: fetched base config from "system" Apr 30 00:50:28.748169 ignition[781]: fetch: fetch complete Apr 30 00:50:28.747682 unknown[781]: fetched user config from "hetzner" Apr 30 00:50:28.748175 ignition[781]: fetch: fetch passed Apr 30 00:50:28.748247 ignition[781]: Ignition finished successfully Apr 30 00:50:28.752463 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 00:50:28.759432 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 00:50:28.772284 ignition[788]: Ignition 2.19.0 Apr 30 00:50:28.772296 ignition[788]: Stage: kargs Apr 30 00:50:28.772517 ignition[788]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:50:28.772535 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:50:28.773654 ignition[788]: kargs: kargs passed Apr 30 00:50:28.773723 ignition[788]: Ignition finished successfully Apr 30 00:50:28.776982 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:50:28.783527 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:50:28.798859 ignition[794]: Ignition 2.19.0 Apr 30 00:50:28.798869 ignition[794]: Stage: disks Apr 30 00:50:28.799074 ignition[794]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:50:28.799085 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:50:28.800213 ignition[794]: disks: disks passed Apr 30 00:50:28.800275 ignition[794]: Ignition finished successfully Apr 30 00:50:28.801453 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:50:28.804227 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:50:28.804880 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:50:28.806090 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:50:28.807308 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:50:28.808497 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:50:28.815624 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:50:28.837799 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 00:50:28.842224 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:50:28.847339 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:50:28.916245 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 00:50:28.917524 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:50:28.920074 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:50:28.928365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:50:28.932793 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:50:28.934997 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 00:50:28.939526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:50:28.939589 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:50:28.944437 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (810) Apr 30 00:50:28.946326 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:50:28.949438 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 00:50:28.953355 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:50:28.953412 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:50:28.953424 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:50:28.958580 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 00:50:28.958654 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:50:28.964275 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 00:50:29.020063 coreos-metadata[812]: Apr 30 00:50:29.019 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 30 00:50:29.021827 coreos-metadata[812]: Apr 30 00:50:29.021 INFO Fetch successful Apr 30 00:50:29.023434 coreos-metadata[812]: Apr 30 00:50:29.023 INFO wrote hostname ci-4081-3-3-6-32a99953eb to /sysroot/etc/hostname Apr 30 00:50:29.026620 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:50:29.027929 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:50:29.035466 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:50:29.041315 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:50:29.048088 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:50:29.167096 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:50:29.173361 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:50:29.176706 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:50:29.187246 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:50:29.213831 ignition[930]: INFO : Ignition 2.19.0 Apr 30 00:50:29.213831 ignition[930]: INFO : Stage: mount Apr 30 00:50:29.214816 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:50:29.214816 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:50:29.215945 ignition[930]: INFO : mount: mount passed Apr 30 00:50:29.215945 ignition[930]: INFO : Ignition finished successfully Apr 30 00:50:29.217269 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:50:29.222418 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:50:29.223300 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 00:50:29.274955 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:50:29.288677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:50:29.300244 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Apr 30 00:50:29.302208 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:50:29.302269 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:50:29.302282 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:50:29.305198 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 00:50:29.305260 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:50:29.307585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:50:29.336351 ignition[958]: INFO : Ignition 2.19.0 Apr 30 00:50:29.336351 ignition[958]: INFO : Stage: files Apr 30 00:50:29.337582 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:50:29.337582 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:50:29.339129 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:50:29.339773 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:50:29.339773 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:50:29.343639 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:50:29.345272 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:50:29.345272 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:50:29.344111 unknown[958]: wrote ssh authorized keys file for user: core Apr 30 00:50:29.347866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 00:50:29.347866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 00:50:29.347866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:50:29.347866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 00:50:29.463812 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 00:50:29.683623 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:50:29.683623 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:50:29.686737 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:50:29.686737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 00:50:29.755794 systemd-networkd[778]: eth0: Gained IPv6LL Apr 30 00:50:30.139630 systemd-networkd[778]: eth1: Gained IPv6LL Apr 30 00:50:30.262838 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 00:50:30.468487 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:50:30.468487 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 30 00:50:30.471847 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:50:30.474394 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:50:30.474394 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Apr 30 00:50:30.474394 ignition[958]: INFO : files: files passed Apr 30 00:50:30.474394 ignition[958]: INFO : Ignition finished successfully Apr 30 00:50:30.475758 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:50:30.482502 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:50:30.486927 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 00:50:30.494058 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:50:30.494270 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:50:30.508404 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:50:30.508404 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:50:30.511336 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:50:30.514436 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:50:30.516467 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:50:30.521457 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:50:30.556046 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:50:30.556235 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:50:30.558470 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:50:30.560269 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:50:30.562256 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:50:30.567658 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:50:30.592435 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:50:30.599574 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:50:30.613632 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:50:30.614468 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:50:30.615216 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:50:30.617722 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:50:30.618093 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:50:30.620446 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:50:30.621016 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:50:30.622604 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:50:30.623529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:50:30.624583 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:50:30.625803 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:50:30.626771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:50:30.627820 systemd[1]: Stopped target sysinit.target - System Initialization. 
Apr 30 00:50:30.628909 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:50:30.629856 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:50:30.630646 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:50:30.630785 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:50:30.631948 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:50:30.632584 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:50:30.633577 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:50:30.635211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:50:30.636457 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:50:30.636590 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:50:30.638033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:50:30.638158 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:50:30.639295 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:50:30.639434 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:50:30.640323 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 00:50:30.640432 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:50:30.649577 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:50:30.650061 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:50:30.650215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:50:30.656091 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:50:30.656716 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:50:30.658349 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:50:30.662268 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:50:30.663095 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:50:30.668466 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:50:30.672740 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:50:30.676810 ignition[1010]: INFO : Ignition 2.19.0 Apr 30 00:50:30.676810 ignition[1010]: INFO : Stage: umount Apr 30 00:50:30.676810 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:50:30.676810 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:50:30.676810 ignition[1010]: INFO : umount: umount passed Apr 30 00:50:30.676810 ignition[1010]: INFO : Ignition finished successfully Apr 30 00:50:30.676060 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:50:30.676152 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:50:30.679838 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:50:30.679964 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:50:30.683108 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:50:30.683209 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:50:30.684084 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Apr 30 00:50:30.684130 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:50:30.685342 systemd[1]: Stopped target network.target - Network. Apr 30 00:50:30.686025 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:50:30.686098 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:50:30.687124 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:50:30.689208 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:50:30.693276 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:50:30.695425 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:50:30.695880 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:50:30.696919 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:50:30.696970 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:50:30.702008 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:50:30.702065 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:50:30.703165 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:50:30.703988 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:50:30.705266 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:50:30.705324 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:50:30.709770 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:50:30.711001 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:50:30.714309 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:50:30.716784 systemd-networkd[778]: eth0: DHCPv6 lease lost Apr 30 00:50:30.722344 systemd-networkd[778]: eth1: DHCPv6 lease lost Apr 30 00:50:30.724659 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:50:30.724783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:50:30.726527 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:50:30.726615 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:50:30.738460 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:50:30.738920 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:50:30.738992 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:50:30.740156 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:50:30.741891 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:50:30.741999 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:50:30.748704 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:50:30.749329 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:50:30.761013 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:50:30.761152 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:50:30.763059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:50:30.763143 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:50:30.764164 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Apr 30 00:50:30.764340 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:50:30.765748 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:50:30.765806 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:50:30.767195 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:50:30.767351 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:50:30.770288 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:50:30.770442 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:50:30.773568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:50:30.773648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:50:30.774750 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:50:30.774783 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:50:30.775834 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:50:30.775890 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:50:30.777285 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:50:30.777334 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:50:30.778805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:50:30.778853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:50:30.785422 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:50:30.785948 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:50:30.786018 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:50:30.787060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:50:30.787098 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:50:30.799355 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:50:30.799513 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:50:30.801334 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:50:30.805818 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:50:30.819869 systemd[1]: Switching root. Apr 30 00:50:30.859401 systemd-journald[236]: Journal stopped Apr 30 00:50:31.796426 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Apr 30 00:50:31.796484 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:50:31.796496 kernel: SELinux: policy capability open_perms=1 Apr 30 00:50:31.796510 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:50:31.796523 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:50:31.796537 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:50:31.796548 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:50:31.796561 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:50:31.796571 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:50:31.796580 kernel: audit: type=1403 audit(1745974231.031:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:50:31.796591 systemd[1]: Successfully loaded SELinux policy in 35.424ms. 
Apr 30 00:50:31.796612 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.053ms. Apr 30 00:50:31.796626 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:50:31.796638 systemd[1]: Detected virtualization kvm. Apr 30 00:50:31.796648 systemd[1]: Detected architecture arm64. Apr 30 00:50:31.796659 systemd[1]: Detected first boot. Apr 30 00:50:31.796670 systemd[1]: Hostname set to <ci-4081-3-3-6-32a99953eb>. Apr 30 00:50:31.796679 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:50:31.796690 zram_generator::config[1071]: No configuration found. Apr 30 00:50:31.796705 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:50:31.796715 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:50:31.796725 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 00:50:31.796737 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:50:31.796748 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:50:31.796760 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:50:31.796770 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:50:31.796780 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:50:31.796791 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:50:31.796801 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:50:31.796811 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:50:31.796822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:50:31.796832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:50:31.796845 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:50:31.796857 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:50:31.796867 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:50:31.796878 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:50:31.796890 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 00:50:31.796901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:50:31.796912 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:50:31.796922 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:50:31.796935 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:50:31.796946 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:50:31.796958 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:50:31.796968 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:50:31.796979 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:50:31.796989 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:50:31.796999 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:50:31.797010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:50:31.797022 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:50:31.797032 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:50:31.797042 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:50:31.797053 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:50:31.797063 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:50:31.797073 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:50:31.797085 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:50:31.797095 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:50:31.797109 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:50:31.797122 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:50:31.797132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:50:31.797142 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:50:31.797153 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:50:31.797163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:50:31.797175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:50:31.797203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:50:31.797214 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:50:31.797224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:50:31.797235 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:50:31.797246 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 30 00:50:31.797257 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 30 00:50:31.797267 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:50:31.797279 kernel: ACPI: bus type drm_connector registered Apr 30 00:50:31.797288 kernel: loop: module loaded Apr 30 00:50:31.797299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:50:31.797309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:50:31.797319 kernel: fuse: init (API version 7.39) Apr 30 00:50:31.797330 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:50:31.797340 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:50:31.797406 systemd-journald[1153]: Collecting audit messages is disabled. 
Apr 30 00:50:31.797437 systemd-journald[1153]: Journal started Apr 30 00:50:31.797463 systemd-journald[1153]: Runtime Journal (/run/log/journal/be257bf57664480f8ff7b2d2ce7ef527) is 8.0M, max 76.6M, 68.6M free. Apr 30 00:50:31.802290 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:50:31.805113 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 00:50:31.806693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:50:31.807503 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:50:31.812450 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:50:31.813064 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:50:31.814961 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:50:31.817448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:50:31.819641 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:50:31.819833 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:50:31.820894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:50:31.821088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:50:31.822145 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:50:31.824557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:50:31.825609 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:50:31.826703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:50:31.826879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:50:31.827860 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:50:31.828010 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:50:31.828873 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:50:31.830707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:50:31.832941 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:50:31.835784 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:50:31.836754 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:50:31.849746 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:50:31.859332 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:50:31.864127 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:50:31.867326 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:50:31.873767 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:50:31.879580 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:50:31.882311 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:50:31.889778 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 30 00:50:31.890623 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:50:31.892981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:50:31.908544 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:50:31.909971 systemd-journald[1153]: Time spent on flushing to /var/log/journal/be257bf57664480f8ff7b2d2ce7ef527 is 60.568ms for 1113 entries. Apr 30 00:50:31.909971 systemd-journald[1153]: System Journal (/var/log/journal/be257bf57664480f8ff7b2d2ce7ef527) is 8.0M, max 584.8M, 576.8M free. Apr 30 00:50:31.996712 systemd-journald[1153]: Received client request to flush runtime journal. Apr 30 00:50:31.911857 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:50:31.913391 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:50:31.921416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:50:31.930410 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:50:31.943119 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:50:31.946651 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:50:31.957398 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 00:50:31.971670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:50:31.988063 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 30 00:50:31.988075 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 30 00:50:31.993487 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:50:32.001471 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:50:32.013777 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:50:32.055005 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:50:32.064538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:50:32.079402 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Apr 30 00:50:32.079755 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Apr 30 00:50:32.086725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:50:32.475291 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:50:32.482794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:50:32.506787 systemd-udevd[1234]: Using default interface naming scheme 'v255'. Apr 30 00:50:32.530130 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:50:32.542062 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:50:32.562451 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:50:32.615065 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Apr 30 00:50:32.630580 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 30 00:50:32.716282 systemd-networkd[1242]: lo: Link UP Apr 30 00:50:32.716748 systemd-networkd[1242]: lo: Gained carrier Apr 30 00:50:32.718796 systemd-networkd[1242]: Enumeration completed Apr 30 00:50:32.720856 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:50:32.725525 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:32.726254 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:50:32.731594 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:50:32.733379 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:32.733396 systemd-networkd[1242]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:50:32.737354 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:32.737398 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:32.737428 systemd-networkd[1242]: eth0: Link UP Apr 30 00:50:32.737432 systemd-networkd[1242]: eth0: Gained carrier Apr 30 00:50:32.737440 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:32.751442 systemd-networkd[1242]: eth1: Link UP Apr 30 00:50:32.751451 systemd-networkd[1242]: eth1: Gained carrier Apr 30 00:50:32.751471 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:50:32.760200 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 00:50:32.778697 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1246) Apr 30 00:50:32.819437 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Apr 30 00:50:32.819472 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 00:50:32.819645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:50:32.838674 systemd-networkd[1242]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:50:32.840676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:50:32.841761 systemd-networkd[1242]: eth0: DHCPv4 address 49.12.45.4/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 00:50:32.858481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:50:32.861608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:50:32.863771 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:50:32.863819 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:50:32.864706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 30 00:50:32.868730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:50:32.869685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:50:32.869840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:50:32.883677 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:50:32.885826 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:50:32.919369 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 30 00:50:32.919457 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 00:50:32.919474 kernel: [drm] features: -context_init Apr 30 00:50:32.920236 kernel: [drm] number of scanouts: 1 Apr 30 00:50:32.920290 kernel: [drm] number of cap sets: 0 Apr 30 00:50:32.924219 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 00:50:32.928595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 00:50:32.932223 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 00:50:32.933940 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 00:50:32.936699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:50:32.938039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:50:32.944522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:50:32.951316 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:50:32.951674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:50:32.957542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:50:33.026393 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:50:33.073158 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:50:33.082487 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:50:33.095546 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:50:33.123282 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:50:33.124300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:50:33.133572 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:50:33.139859 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:50:33.164824 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:50:33.165684 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:50:33.166610 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:50:33.166708 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:50:33.167315 systemd[1]: Reached target machines.target - Containers. Apr 30 00:50:33.169691 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Apr 30 00:50:33.176490 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:50:33.180411 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:50:33.182133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:50:33.184555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:50:33.189649 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:50:33.200297 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:50:33.206425 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:50:33.225564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:50:33.234233 kernel: loop0: detected capacity change from 0 to 194096 Apr 30 00:50:33.240456 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:50:33.241356 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:50:33.262360 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:50:33.287219 kernel: loop1: detected capacity change from 0 to 8 Apr 30 00:50:33.311314 kernel: loop2: detected capacity change from 0 to 114328 Apr 30 00:50:33.350263 kernel: loop3: detected capacity change from 0 to 114432 Apr 30 00:50:33.384474 kernel: loop4: detected capacity change from 0 to 194096 Apr 30 00:50:33.404503 kernel: loop5: detected capacity change from 0 to 8 Apr 30 00:50:33.410224 kernel: loop6: detected capacity change from 0 to 114328 Apr 30 00:50:33.420503 kernel: loop7: detected capacity change from 0 to 114432 Apr 30 00:50:33.428968 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 00:50:33.429538 (sd-merge)[1328]: Merged extensions into '/usr'. Apr 30 00:50:33.442837 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:50:33.442855 systemd[1]: Reloading... Apr 30 00:50:33.535215 zram_generator::config[1357]: No configuration found. Apr 30 00:50:33.660933 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:50:33.675714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:50:33.737886 systemd[1]: Reloading finished in 294 ms. Apr 30 00:50:33.756160 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:50:33.760709 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:50:33.769464 systemd[1]: Starting ensure-sysext.service... Apr 30 00:50:33.774462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:50:33.778041 systemd[1]: Reloading requested from client PID 1401 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:50:33.778057 systemd[1]: Reloading... Apr 30 00:50:33.802094 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
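The loop0-loop7 squashfs capacity changes and the (sd-merge) lines above are systemd-sysext attaching the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) as loop devices and overlaying them onto /usr, followed by the daemon reload that picks up the new units. The merge can be inspected and redone with the standard tool; these are stock systemd-sysext subcommands, shown as a sketch:

    # inspect and refresh merged system extensions (sketch)
    systemd-sysext list      # extension images found on disk
    systemd-sysext status    # which hierarchies (/usr, /opt) are merged
    systemd-sysext refresh   # unmerge and re-merge in one step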
Apr 30 00:50:33.803072 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:50:33.804018 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:50:33.804449 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Apr 30 00:50:33.804589 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Apr 30 00:50:33.808634 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:50:33.808921 systemd-tmpfiles[1402]: Skipping /boot Apr 30 00:50:33.819230 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:50:33.819410 systemd-tmpfiles[1402]: Skipping /boot Apr 30 00:50:33.866261 zram_generator::config[1437]: No configuration found. Apr 30 00:50:33.968983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:50:34.031956 systemd[1]: Reloading finished in 253 ms. Apr 30 00:50:34.049171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:50:34.068486 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:50:34.078475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:50:34.083428 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:50:34.089165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:50:34.095620 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:50:34.110383 systemd-networkd[1242]: eth0: Gained IPv6LL Apr 30 00:50:34.116119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:50:34.124948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:50:34.145531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:50:34.151591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:50:34.153961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:50:34.157938 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:50:34.164041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:50:34.166375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:50:34.175853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:50:34.176044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:50:34.183858 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:50:34.191646 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:50:34.194461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:50:34.206499 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
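The "Duplicate line for path" warnings above are systemd-tmpfiles noticing that two fragments declare the same path; the later-processed declaration (the file:line named in the warning) is ignored. A hypothetical pair of fragments that would reproduce the /root warning (file names and modes are illustrative, not read from this image):

    # /usr/lib/tmpfiles.d/base.conf (hypothetical) -- processed first, wins
    d /root 0700 root root -

    # /usr/lib/tmpfiles.d/provision.conf:20 (hypothetical) -- duplicate, ignored
    d /root 0700 root root -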
Apr 30 00:50:34.206765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:50:34.214101 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:50:34.220764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:50:34.223650 augenrules[1511]: No rules Apr 30 00:50:34.229626 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:50:34.241563 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:50:34.247606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:50:34.250403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:50:34.253111 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:50:34.253415 systemd-resolved[1480]: Positive Trust Anchors: Apr 30 00:50:34.253430 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:50:34.253464 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:50:34.260349 systemd-resolved[1480]: Using system hostname 'ci-4081-3-3-6-32a99953eb'. Apr 30 00:50:34.267733 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:50:34.273405 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:50:34.275046 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:50:34.278625 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:50:34.279846 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:50:34.280032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:50:34.281377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:50:34.281560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:50:34.282738 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:50:34.282946 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:50:34.294029 systemd[1]: Reached target network.target - Network. Apr 30 00:50:34.294725 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:50:34.295478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:50:34.296299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:50:34.301514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:50:34.305680 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
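The "Positive Trust Anchors" block above is systemd-resolved loading its built-in DNSSEC root trust anchor (the DS record for the KSK-2017 root key) plus the default negative anchors for private and reverse zones. Local overrides live in dnssec-trust-anchors.d; a sketch of the two file flavours (the paths follow the documented convention, the contents mirror the log):

    # /etc/dnssec-trust-anchors.d/root.positive (sketch) -- DS/DNSKEY records
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

    # /etc/dnssec-trust-anchors.d/local.negative (sketch) -- one domain per line
    home.arpa
    internal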
Apr 30 00:50:34.312730 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:50:34.318588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:50:34.320561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:50:34.320733 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:50:34.323631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:50:34.323845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:50:34.324983 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:50:34.325151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:50:34.331764 systemd[1]: Finished ensure-sysext.service. Apr 30 00:50:34.335774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:50:34.336856 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:50:34.341658 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:50:34.342090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:50:34.344714 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:50:34.344804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:50:34.351530 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:50:34.404858 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:50:34.408533 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:50:34.410163 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:50:34.412568 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:50:34.414353 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:50:34.415690 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:50:34.415764 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:50:34.416748 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:50:34.418034 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:50:34.419280 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:50:34.420447 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:50:34.422734 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:50:34.425828 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:50:34.429750 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:50:34.432957 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:50:34.433573 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:50:34.434016 systemd[1]: Reached target basic.target - Basic System. 
Apr 30 00:50:34.434759 systemd[1]: System is tainted: cgroupsv1 Apr 30 00:50:34.434807 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:50:34.434837 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:50:34.438382 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:50:34.443423 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 00:50:34.445156 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:50:34.456413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:50:34.460897 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:50:34.461786 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:50:34.470877 jq[1557]: false Apr 30 00:50:34.475384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:50:34.973111 systemd-resolved[1480]: Clock change detected. Flushing caches. Apr 30 00:50:34.973287 systemd-timesyncd[1549]: Contacted time server 78.46.60.40:123 (0.flatcar.pool.ntp.org). Apr 30 00:50:34.973348 systemd-timesyncd[1549]: Initial clock synchronization to Wed 2025-04-30 00:50:34.973040 UTC. Apr 30 00:50:34.981592 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:50:34.987884 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:50:34.997069 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:50:35.006674 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 00:50:35.018563 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:50:35.020710 dbus-daemon[1556]: [system] SELinux support is enabled Apr 30 00:50:35.025248 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:50:35.042722 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:50:35.046598 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:50:35.053320 systemd[1]: Starting update-engine.service - Update Engine... 
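Two things happen together above: systemd-timesyncd reaches 0.flatcar.pool.ntp.org and steps the clock, which is why systemd-resolved logs "Clock change detected" and flushes its caches, and why the journal timestamps jump forward by about half a second mid-stream. The server pool is configurable through timesyncd.conf; a minimal sketch using the pool name from the log:

    # /etc/systemd/timesyncd.conf (sketch)
    [Time]
    NTP=0.flatcar.pool.ntp.org
    FallbackNTP=1.flatcar.pool.ntp.org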
Apr 30 00:50:35.057686 coreos-metadata[1554]: Apr 30 00:50:35.057 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 00:50:35.066398 coreos-metadata[1554]: Apr 30 00:50:35.059 INFO Fetch successful Apr 30 00:50:35.066398 coreos-metadata[1554]: Apr 30 00:50:35.060 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 00:50:35.066398 coreos-metadata[1554]: Apr 30 00:50:35.061 INFO Fetch successful Apr 30 00:50:35.066535 extend-filesystems[1558]: Found loop4 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found loop5 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found loop6 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found loop7 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda1 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda2 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda3 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found usr Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda4 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda6 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda7 Apr 30 00:50:35.066535 extend-filesystems[1558]: Found sda9 Apr 30 00:50:35.066535 extend-filesystems[1558]: Checking size of /dev/sda9 Apr 30 00:50:35.068320 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:50:35.091049 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:50:35.098449 jq[1583]: true Apr 30 00:50:35.111220 update_engine[1582]: I20250430 00:50:35.109606 1582 main.cc:92] Flatcar Update Engine starting Apr 30 00:50:35.111532 extend-filesystems[1558]: Resized partition /dev/sda9 Apr 30 00:50:35.114335 extend-filesystems[1598]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:50:35.115888 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:50:35.126242 update_engine[1582]: I20250430 00:50:35.117420 1582 update_check_scheduler.cc:74] Next update check in 9m50s Apr 30 00:50:35.117913 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:50:35.125239 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:50:35.125543 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:50:35.128993 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 00:50:35.130679 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:50:35.152007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:50:35.152343 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 00:50:35.182723 systemd-networkd[1242]: eth1: Gained IPv6LL Apr 30 00:50:35.194455 (ntainerd)[1612]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:50:35.211011 jq[1607]: true Apr 30 00:50:35.228137 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:50:35.229629 tar[1605]: linux-arm64/helm Apr 30 00:50:35.249502 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1249) Apr 30 00:50:35.248785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
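extend-filesystems above enumerates the disk and then grows the root filesystem online: the kernel line shows ext4 on sda9 being resized from 1617920 to 9393147 blocks while mounted on /. The moral equivalent by hand would be the following (a sketch; Flatcar's own extend-filesystems script, not these exact commands, does the work here, and growpart comes from cloud-utils):

    # grow the last partition and the ext4 filesystem on it, online (sketch)
    growpart /dev/sda 9      # extend partition 9 to the end of the disk
    resize2fs /dev/sda9      # ext4 supports online grow while mounted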
Apr 30 00:50:35.248817 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:50:35.250648 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:50:35.250843 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:50:35.254579 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:50:35.256794 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:50:35.300375 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 00:50:35.305676 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:50:35.400989 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 00:50:35.400863 systemd-logind[1580]: New seat seat0. Apr 30 00:50:35.414479 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:50:35.414502 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 30 00:50:35.415137 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:50:35.424538 extend-filesystems[1598]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 00:50:35.424538 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 00:50:35.424538 extend-filesystems[1598]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 00:50:35.449209 bash[1649]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:50:35.426632 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:50:35.449388 extend-filesystems[1558]: Resized filesystem in /dev/sda9 Apr 30 00:50:35.449388 extend-filesystems[1558]: Found sr0 Apr 30 00:50:35.456802 systemd[1]: Starting sshkeys.service... Apr 30 00:50:35.458305 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:50:35.459381 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:50:35.497916 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 00:50:35.518677 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 00:50:35.556979 coreos-metadata[1659]: Apr 30 00:50:35.556 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 00:50:35.565621 coreos-metadata[1659]: Apr 30 00:50:35.562 INFO Fetch successful Apr 30 00:50:35.567325 unknown[1659]: wrote ssh authorized keys file for user: core Apr 30 00:50:35.611310 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:50:35.615816 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 00:50:35.620718 systemd[1]: Finished sshkeys.service. 
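update-engine (next check in 9m50s) and locksmithd, the cluster reboot manager started above, are Flatcar's update pair: the engine downloads and applies updates, locksmith coordinates when the machine may reboot. Both read /etc/flatcar/update.conf; a sketch matching the reboot strategy this host logs shortly afterwards:

    # /etc/flatcar/update.conf (sketch)
    GROUP=stable
    REBOOT_STRATEGY=reboot   # alternatives include etcd-lock and off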
Apr 30 00:50:35.735357 containerd[1612]: time="2025-04-30T00:50:35.735253506Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 00:50:35.802261 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:50:35.816507 containerd[1612]: time="2025-04-30T00:50:35.816145386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823443 containerd[1612]: time="2025-04-30T00:50:35.823210066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823443 containerd[1612]: time="2025-04-30T00:50:35.823262626Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:50:35.823443 containerd[1612]: time="2025-04-30T00:50:35.823282866Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823454146Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823473706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823537026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823550426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823759586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823777026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823796506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823805906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:50:35.823919 containerd[1612]: time="2025-04-30T00:50:35.823874586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:50:35.826345 containerd[1612]: time="2025-04-30T00:50:35.826189226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:50:35.826536 containerd[1612]: time="2025-04-30T00:50:35.826375946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:50:35.826536 containerd[1612]: time="2025-04-30T00:50:35.826390466Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:50:35.826536 containerd[1612]: time="2025-04-30T00:50:35.826480146Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 00:50:35.826536 containerd[1612]: time="2025-04-30T00:50:35.826526626Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:50:35.834746 containerd[1612]: time="2025-04-30T00:50:35.834141786Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:50:35.834746 containerd[1612]: time="2025-04-30T00:50:35.834220786Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:50:35.834746 containerd[1612]: time="2025-04-30T00:50:35.834238666Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 00:50:35.834746 containerd[1612]: time="2025-04-30T00:50:35.834254946Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 00:50:35.834746 containerd[1612]: time="2025-04-30T00:50:35.834273346Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:50:35.834746 containerd[1612]: time="2025-04-30T00:50:35.834462026Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.834775666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.834886666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.834904186Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.834917746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.834963866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.834980306Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.834994026Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.835008346Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.835023626Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.835036426Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.835049106Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.835063306Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.835135586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835376 containerd[1612]: time="2025-04-30T00:50:35.835151186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835164186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835178186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835190306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835204386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835217746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835243626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835260466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835276226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835288626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835300826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835314346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835335626Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835359546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835372746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 30 00:50:35.835620 containerd[1612]: time="2025-04-30T00:50:35.835384826Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835509626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835529346Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835541186Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835553186Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835562506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835575306Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835585906Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:50:35.835876 containerd[1612]: time="2025-04-30T00:50:35.835596466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.840131786Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.840225506Z" level=info msg="Connect containerd service" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.840265986Z" level=info msg="using legacy CRI server" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.840273346Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.840378066Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.841110626Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.841621386Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.841660546Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.841799626Z" level=info msg="Start subscribing containerd event" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.841838386Z" level=info msg="Start recovering state" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.841910146Z" level=info msg="Start event monitor" Apr 30 00:50:35.842209 containerd[1612]: time="2025-04-30T00:50:35.841921666Z" level=info msg="Start snapshots syncer" Apr 30 00:50:35.848664 containerd[1612]: time="2025-04-30T00:50:35.845958426Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:50:35.848664 containerd[1612]: time="2025-04-30T00:50:35.845985666Z" level=info msg="Start streaming server" Apr 30 00:50:35.848664 containerd[1612]: time="2025-04-30T00:50:35.846157146Z" level=info msg="containerd successfully booted in 0.116096s" Apr 30 00:50:35.846298 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:50:36.143664 tar[1605]: linux-arm64/LICENSE Apr 30 00:50:36.143664 tar[1605]: linux-arm64/README.md Apr 30 00:50:36.169433 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:50:36.246040 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:50:36.281058 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:50:36.287154 systemd[1]: Starting issuegen.service - Generate /run/issue... 
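The long CRI config dump above is containerd echoing its effective configuration at startup; the one setting worth noting on this host is SystemdCgroup:false for the runc runtime, consistent with the cgroupsv1 taint logged earlier. The TOML fragment that controls it looks like this in stock containerd 1.7 (a sketch, not read from this image):

    # /etc/containerd/config.toml (sketch)
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false   # matches the dumped CRI config above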
Apr 30 00:50:36.317206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:50:36.318373 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:50:36.318635 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:50:36.318688 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:50:36.327372 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:50:36.353681 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:50:36.364502 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:50:36.368158 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:50:36.368975 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:50:36.369574 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:50:36.372334 systemd[1]: Startup finished in 6.136s (kernel) + 4.881s (userspace) = 11.018s. Apr 30 00:50:36.936390 kubelet[1705]: E0430 00:50:36.936314 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:50:36.941310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:50:36.941641 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:50:47.192258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:50:47.203436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:50:47.320249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:50:47.327580 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:50:47.385662 kubelet[1738]: E0430 00:50:47.385613 1738 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:50:47.392180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:50:47.392773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:50:57.642887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:50:57.648560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:50:57.779372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
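The kubelet failure above, and the identical retries roughly every ten seconds that follow, are the expected state of a node that has not yet joined a cluster: the unit starts kubelet pointing at /var/lib/kubelet/config.yaml, which only exists once kubeadm init or kubeadm join writes it. A minimal sketch of the file the error is looking for (kubeadm generates a much fuller one; cgroupfs is chosen here to match the SystemdCgroup:false in the containerd dump, and is an assumption):

    # /var/lib/kubelet/config.yaml (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs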
Apr 30 00:50:57.779469 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:50:57.839442 kubelet[1759]: E0430 00:50:57.839373 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:50:57.844465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:50:57.844785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:51:08.095498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 00:51:08.110189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:51:08.225224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:51:08.231066 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:51:08.282513 kubelet[1780]: E0430 00:51:08.282459 1780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:51:08.284987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:51:08.285158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:51:18.300854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 00:51:18.310544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:51:18.440250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:51:18.444998 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:51:18.495524 kubelet[1801]: E0430 00:51:18.495458 1801 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:51:18.499515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:51:18.499772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:51:20.398153 update_engine[1582]: I20250430 00:51:20.397986 1582 update_attempter.cc:509] Updating boot flags... Apr 30 00:51:20.465159 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1819) Apr 30 00:51:20.519961 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1822) Apr 30 00:51:20.580955 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1822) Apr 30 00:51:28.550141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 00:51:28.558219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 00:51:28.679717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:51:28.690662 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:51:28.745980 kubelet[1843]: E0430 00:51:28.745903 1843 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:51:28.750837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:51:28.751168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:51:38.801343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 00:51:38.809273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:51:38.930122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:51:38.934491 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:51:38.987069 kubelet[1864]: E0430 00:51:38.986876 1864 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:51:38.990175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:51:38.990567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:51:49.049827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 00:51:49.067312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:51:49.212267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:51:49.213431 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:51:49.270767 kubelet[1885]: E0430 00:51:49.270703 1885 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:51:49.274832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:51:49.275176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:51:59.299607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 00:51:59.305344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:51:59.469242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:51:59.481569 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:51:59.536441 kubelet[1906]: E0430 00:51:59.536372 1906 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:51:59.541756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:51:59.542526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:52:07.747454 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:52:07.756435 systemd[1]: Started sshd@0-49.12.45.4:22-139.178.68.195:38064.service - OpenSSH per-connection server daemon (139.178.68.195:38064). Apr 30 00:52:08.744141 sshd[1914]: Accepted publickey for core from 139.178.68.195 port 38064 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:08.747163 sshd[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:08.759132 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:52:08.765423 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:52:08.769012 systemd-logind[1580]: New session 1 of user core. Apr 30 00:52:08.787239 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:52:08.794699 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:52:08.800746 (systemd)[1920]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:52:08.916322 systemd[1920]: Queued start job for default target default.target. Apr 30 00:52:08.916771 systemd[1920]: Created slice app.slice - User Application Slice. Apr 30 00:52:08.916795 systemd[1920]: Reached target paths.target - Paths. Apr 30 00:52:08.916807 systemd[1920]: Reached target timers.target - Timers. Apr 30 00:52:08.932250 systemd[1920]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:52:08.942375 systemd[1920]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:52:08.943803 systemd[1920]: Reached target sockets.target - Sockets. Apr 30 00:52:08.943857 systemd[1920]: Reached target basic.target - Basic System. Apr 30 00:52:08.944010 systemd[1920]: Reached target default.target - Main User Target. Apr 30 00:52:08.944072 systemd[1920]: Startup finished in 135ms. Apr 30 00:52:08.944844 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:52:08.957649 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:52:09.549831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 00:52:09.566515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:52:09.651124 systemd[1]: Started sshd@1-49.12.45.4:22-139.178.68.195:38078.service - OpenSSH per-connection server daemon (139.178.68.195:38078). Apr 30 00:52:09.743337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
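The per-connection sshd@0-49.12.45.4:22-139.178.68.195:38064.service unit above is the signature of socket activation with Accept=yes: systemd listens on port 22 and spawns one templated service instance per incoming connection, named after the local and remote endpoints. The pattern, as a sketch (the exact unit text shipped on this image is an assumption):

    # sshd.socket (sketch of the Accept=yes per-connection pattern)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target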
Apr 30 00:52:09.744388 (kubelet)[1946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:52:09.796644 kubelet[1946]: E0430 00:52:09.796576 1946 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:52:09.799102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:52:09.799269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:52:10.625805 sshd[1936]: Accepted publickey for core from 139.178.68.195 port 38078 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:10.628979 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:10.636976 systemd-logind[1580]: New session 2 of user core. Apr 30 00:52:10.643891 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:52:11.307848 sshd[1936]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:11.311881 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:52:11.312655 systemd[1]: sshd@1-49.12.45.4:22-139.178.68.195:38078.service: Deactivated successfully. Apr 30 00:52:11.317106 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:52:11.318132 systemd-logind[1580]: Removed session 2. Apr 30 00:52:11.474382 systemd[1]: Started sshd@2-49.12.45.4:22-139.178.68.195:38082.service - OpenSSH per-connection server daemon (139.178.68.195:38082). Apr 30 00:52:12.453248 sshd[1961]: Accepted publickey for core from 139.178.68.195 port 38082 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:12.455505 sshd[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:12.462018 systemd-logind[1580]: New session 3 of user core. Apr 30 00:52:12.473615 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:52:13.132490 sshd[1961]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:13.136498 systemd[1]: sshd@2-49.12.45.4:22-139.178.68.195:38082.service: Deactivated successfully. Apr 30 00:52:13.140122 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:52:13.140739 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:52:13.142673 systemd-logind[1580]: Removed session 3. Apr 30 00:52:13.300443 systemd[1]: Started sshd@3-49.12.45.4:22-139.178.68.195:38094.service - OpenSSH per-connection server daemon (139.178.68.195:38094). Apr 30 00:52:14.290686 sshd[1969]: Accepted publickey for core from 139.178.68.195 port 38094 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:14.292143 sshd[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:14.298103 systemd-logind[1580]: New session 4 of user core. Apr 30 00:52:14.312524 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:52:14.976690 sshd[1969]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:14.981187 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:52:14.982526 systemd[1]: sshd@3-49.12.45.4:22-139.178.68.195:38094.service: Deactivated successfully. 
Apr 30 00:52:14.987071 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:52:14.988964 systemd-logind[1580]: Removed session 4. Apr 30 00:52:15.140561 systemd[1]: Started sshd@4-49.12.45.4:22-139.178.68.195:38096.service - OpenSSH per-connection server daemon (139.178.68.195:38096). Apr 30 00:52:16.121211 sshd[1977]: Accepted publickey for core from 139.178.68.195 port 38096 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:16.123397 sshd[1977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:16.132041 systemd-logind[1580]: New session 5 of user core. Apr 30 00:52:16.142398 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:52:16.656552 sudo[1981]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:52:16.656884 sudo[1981]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:52:16.672877 sudo[1981]: pam_unix(sudo:session): session closed for user root Apr 30 00:52:16.835674 sshd[1977]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:16.841822 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:52:16.843067 systemd[1]: sshd@4-49.12.45.4:22-139.178.68.195:38096.service: Deactivated successfully. Apr 30 00:52:16.846820 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:52:16.849207 systemd-logind[1580]: Removed session 5. Apr 30 00:52:17.004767 systemd[1]: Started sshd@5-49.12.45.4:22-139.178.68.195:58640.service - OpenSSH per-connection server daemon (139.178.68.195:58640). Apr 30 00:52:17.979099 sshd[1986]: Accepted publickey for core from 139.178.68.195 port 58640 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:17.981837 sshd[1986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:17.989370 systemd-logind[1580]: New session 6 of user core. Apr 30 00:52:18.001031 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:52:18.503996 sudo[1991]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:52:18.506553 sudo[1991]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:52:18.514329 sudo[1991]: pam_unix(sudo:session): session closed for user root Apr 30 00:52:18.522363 sudo[1990]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:52:18.522656 sudo[1990]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:52:18.541535 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:52:18.543971 auditctl[1994]: No rules Apr 30 00:52:18.544476 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:52:18.544869 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:52:18.554323 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:52:18.583580 augenrules[2013]: No rules Apr 30 00:52:18.586368 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:52:18.588982 sudo[1990]: pam_unix(sudo:session): session closed for user root Apr 30 00:52:18.749445 sshd[1986]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:18.754836 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. 
Apr 30 00:52:18.756165 systemd[1]: sshd@5-49.12.45.4:22-139.178.68.195:58640.service: Deactivated successfully. Apr 30 00:52:18.759302 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:52:18.760728 systemd-logind[1580]: Removed session 6. Apr 30 00:52:18.932597 systemd[1]: Started sshd@6-49.12.45.4:22-139.178.68.195:58654.service - OpenSSH per-connection server daemon (139.178.68.195:58654). Apr 30 00:52:19.913421 sshd[2022]: Accepted publickey for core from 139.178.68.195 port 58654 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:52:19.915461 sshd[2022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:52:19.916526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 00:52:19.925296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:52:19.931069 systemd-logind[1580]: New session 7 of user core. Apr 30 00:52:19.936989 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:52:20.044257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:52:20.049318 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:52:20.099158 kubelet[2038]: E0430 00:52:20.099096 2038 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:52:20.104373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:52:20.104780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:52:20.436015 sudo[2046]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:52:20.436332 sudo[2046]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:52:20.756723 (dockerd)[2061]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:52:20.756770 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:52:21.025329 dockerd[2061]: time="2025-04-30T00:52:21.024575700Z" level=info msg="Starting up" Apr 30 00:52:21.134845 dockerd[2061]: time="2025-04-30T00:52:21.134800950Z" level=info msg="Loading containers: start." Apr 30 00:52:21.261965 kernel: Initializing XFRM netlink socket Apr 30 00:52:21.356143 systemd-networkd[1242]: docker0: Link UP Apr 30 00:52:21.383987 dockerd[2061]: time="2025-04-30T00:52:21.383672972Z" level=info msg="Loading containers: done." 
Apr 30 00:52:21.400178 dockerd[2061]: time="2025-04-30T00:52:21.400116229Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:52:21.400441 dockerd[2061]: time="2025-04-30T00:52:21.400253322Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 00:52:21.400441 dockerd[2061]: time="2025-04-30T00:52:21.400389174Z" level=info msg="Daemon has completed initialization"
Apr 30 00:52:21.451845 dockerd[2061]: time="2025-04-30T00:52:21.449825676Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:52:21.451092 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:52:22.103206 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4153853777-merged.mount: Deactivated successfully.
Apr 30 00:52:22.644838 containerd[1612]: time="2025-04-30T00:52:22.644787434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:52:23.286005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953016799.mount: Deactivated successfully.
Apr 30 00:52:24.417699 containerd[1612]: time="2025-04-30T00:52:24.417636831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:24.418993 containerd[1612]: time="2025-04-30T00:52:24.418946978Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794242"
Apr 30 00:52:24.420434 containerd[1612]: time="2025-04-30T00:52:24.420359853Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:24.426213 containerd[1612]: time="2025-04-30T00:52:24.426082841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:24.427849 containerd[1612]: time="2025-04-30T00:52:24.427785220Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.782937221s"
Apr 30 00:52:24.428493 containerd[1612]: time="2025-04-30T00:52:24.428044801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 00:52:24.452233 containerd[1612]: time="2025-04-30T00:52:24.452140731Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:52:25.718573 containerd[1612]: time="2025-04-30T00:52:25.718474417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:25.721270 containerd[1612]: time="2025-04-30T00:52:25.721062223Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855570"
Apr 30 00:52:25.723024 containerd[1612]: time="2025-04-30T00:52:25.722968335Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:25.731917 containerd[1612]: time="2025-04-30T00:52:25.731795357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:25.734297 containerd[1612]: time="2025-04-30T00:52:25.734024215Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.281510774s"
Apr 30 00:52:25.734297 containerd[1612]: time="2025-04-30T00:52:25.734111461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 00:52:25.759370 containerd[1612]: time="2025-04-30T00:52:25.759104530Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:52:26.694038 containerd[1612]: time="2025-04-30T00:52:26.693987973Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263965"
Apr 30 00:52:26.695128 containerd[1612]: time="2025-04-30T00:52:26.695053095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:26.697831 containerd[1612]: time="2025-04-30T00:52:26.697225944Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:26.698379 containerd[1612]: time="2025-04-30T00:52:26.698337710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:26.699712 containerd[1612]: time="2025-04-30T00:52:26.699676694Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 940.52508ms"
Apr 30 00:52:26.699839 containerd[1612]: time="2025-04-30T00:52:26.699821665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 00:52:26.727899 containerd[1612]: time="2025-04-30T00:52:26.727860798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:52:27.663510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568015636.mount: Deactivated successfully.
Apr 30 00:52:28.018744 containerd[1612]: time="2025-04-30T00:52:28.015911603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:28.018744 containerd[1612]: time="2025-04-30T00:52:28.018626283Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775731" Apr 30 00:52:28.020922 containerd[1612]: time="2025-04-30T00:52:28.020875648Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:28.024991 containerd[1612]: time="2025-04-30T00:52:28.024879582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:28.026497 containerd[1612]: time="2025-04-30T00:52:28.026424056Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.297930849s" Apr 30 00:52:28.026497 containerd[1612]: time="2025-04-30T00:52:28.026495301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 00:52:28.051819 containerd[1612]: time="2025-04-30T00:52:28.051774359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 00:52:28.640598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063160232.mount: Deactivated successfully. 
Apr 30 00:52:29.226365 containerd[1612]: time="2025-04-30T00:52:29.226248775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:29.228601 containerd[1612]: time="2025-04-30T00:52:29.228077866Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Apr 30 00:52:29.229736 containerd[1612]: time="2025-04-30T00:52:29.229695302Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:29.235677 containerd[1612]: time="2025-04-30T00:52:29.235185335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:29.236754 containerd[1612]: time="2025-04-30T00:52:29.236700203Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.184683827s" Apr 30 00:52:29.236901 containerd[1612]: time="2025-04-30T00:52:29.236882617Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 00:52:29.259648 containerd[1612]: time="2025-04-30T00:52:29.259528438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 00:52:29.760331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170746605.mount: Deactivated successfully. 
Apr 30 00:52:29.768510 containerd[1612]: time="2025-04-30T00:52:29.768437115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:29.770628 containerd[1612]: time="2025-04-30T00:52:29.770577188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Apr 30 00:52:29.773858 containerd[1612]: time="2025-04-30T00:52:29.773789458Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:29.778520 containerd[1612]: time="2025-04-30T00:52:29.778418950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:29.779981 containerd[1612]: time="2025-04-30T00:52:29.779684281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 520.107399ms" Apr 30 00:52:29.779981 containerd[1612]: time="2025-04-30T00:52:29.779734044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 00:52:29.804707 containerd[1612]: time="2025-04-30T00:52:29.804670270Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 00:52:30.299214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 00:52:30.309219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:52:30.424341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047516877.mount: Deactivated successfully. Apr 30 00:52:30.461443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:52:30.475707 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:52:30.541686 kubelet[2373]: E0430 00:52:30.541639 2373 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:52:30.546164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:52:30.546408 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 00:52:31.862965 containerd[1612]: time="2025-04-30T00:52:31.861654146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:31.864533 containerd[1612]: time="2025-04-30T00:52:31.864495539Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Apr 30 00:52:31.865999 containerd[1612]: time="2025-04-30T00:52:31.865931197Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:31.872531 containerd[1612]: time="2025-04-30T00:52:31.872457240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:52:31.875295 containerd[1612]: time="2025-04-30T00:52:31.875225949Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.070511836s" Apr 30 00:52:31.875295 containerd[1612]: time="2025-04-30T00:52:31.875291313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 00:52:37.125687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:52:37.131507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:52:37.157555 systemd[1]: Reloading requested from client PID 2485 ('systemctl') (unit session-7.scope)... Apr 30 00:52:37.157722 systemd[1]: Reloading... Apr 30 00:52:37.294986 zram_generator::config[2534]: No configuration found. Apr 30 00:52:37.416361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:52:37.486999 systemd[1]: Reloading finished in 328 ms. Apr 30 00:52:37.543676 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:52:37.543867 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:52:37.544820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:52:37.551998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:52:37.664216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:52:37.676364 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:52:37.730262 kubelet[2585]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:52:37.730262 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 30 00:52:37.730262 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:52:37.730712 kubelet[2585]: I0430 00:52:37.730425 2585 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:52:38.896895 kubelet[2585]: I0430 00:52:38.896458 2585 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:52:38.896895 kubelet[2585]: I0430 00:52:38.896546 2585 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:52:38.896895 kubelet[2585]: I0430 00:52:38.896841 2585 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:52:38.919306 kubelet[2585]: E0430 00:52:38.918982 2585 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.12.45.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:38.919306 kubelet[2585]: I0430 00:52:38.919110 2585 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:52:38.934810 kubelet[2585]: I0430 00:52:38.934771 2585 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:52:38.937217 kubelet[2585]: I0430 00:52:38.937126 2585 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:52:38.937481 kubelet[2585]: I0430 00:52:38.937216 2585 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-6-32a99953eb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:52:38.937678 kubelet[2585]: I0430 00:52:38.937544 2585 topology_manager.go:138] "Creating 
topology manager with none policy" Apr 30 00:52:38.937678 kubelet[2585]: I0430 00:52:38.937556 2585 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:52:38.937919 kubelet[2585]: I0430 00:52:38.937875 2585 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:52:38.940988 kubelet[2585]: I0430 00:52:38.939350 2585 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:52:38.940988 kubelet[2585]: I0430 00:52:38.939389 2585 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:52:38.940988 kubelet[2585]: I0430 00:52:38.939659 2585 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:52:38.940988 kubelet[2585]: I0430 00:52:38.939678 2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:52:38.942085 kubelet[2585]: I0430 00:52:38.942050 2585 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:52:38.942822 kubelet[2585]: I0430 00:52:38.942787 2585 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:52:38.943067 kubelet[2585]: W0430 00:52:38.943048 2585 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 00:52:38.944279 kubelet[2585]: I0430 00:52:38.944232 2585 server.go:1264] "Started kubelet" Apr 30 00:52:38.944618 kubelet[2585]: W0430 00:52:38.944563 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.12.45.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:38.944745 kubelet[2585]: E0430 00:52:38.944731 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.12.45.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:38.944897 kubelet[2585]: W0430 00:52:38.944865 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.12.45.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-6-32a99953eb&limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:38.945015 kubelet[2585]: E0430 00:52:38.945002 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.12.45.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-6-32a99953eb&limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:38.952009 kubelet[2585]: E0430 00:52:38.951477 2585 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.45.4:6443/api/v1/namespaces/default/events\": dial tcp 49.12.45.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-6-32a99953eb.183af26dcbcf33e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-6-32a99953eb,UID:ci-4081-3-3-6-32a99953eb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-6-32a99953eb,},FirstTimestamp:2025-04-30 00:52:38.944199652 +0000 UTC m=+1.264060163,LastTimestamp:2025-04-30 00:52:38.944199652 +0000 UTC m=+1.264060163,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-6-32a99953eb,}" Apr 30 00:52:38.952009 kubelet[2585]: I0430 00:52:38.951930 2585 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:52:38.953345 kubelet[2585]: I0430 00:52:38.953297 2585 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:52:38.954071 kubelet[2585]: I0430 00:52:38.954040 2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:52:38.954365 kubelet[2585]: I0430 00:52:38.954298 2585 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:52:38.954600 kubelet[2585]: I0430 00:52:38.954573 2585 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:52:38.961534 kubelet[2585]: I0430 00:52:38.961471 2585 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:52:38.964159 kubelet[2585]: I0430 00:52:38.963988 2585 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:52:38.966348 kubelet[2585]: I0430 00:52:38.965400 2585 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:52:38.967089 kubelet[2585]: I0430 00:52:38.967060 2585 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:52:38.967305 kubelet[2585]: E0430 00:52:38.967263 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.45.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-6-32a99953eb?timeout=10s\": dial tcp 49.12.45.4:6443: connect: connection refused" interval="200ms" Apr 30 00:52:38.967693 kubelet[2585]: W0430 00:52:38.967128 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.45.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:38.967841 kubelet[2585]: E0430 00:52:38.967826 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.12.45.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:38.968726 kubelet[2585]: E0430 00:52:38.968681 2585 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:52:38.970108 kubelet[2585]: I0430 00:52:38.970062 2585 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:52:38.970108 kubelet[2585]: I0430 00:52:38.970090 2585 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:52:38.998163 kubelet[2585]: I0430 00:52:38.998067 2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:52:39.002230 kubelet[2585]: I0430 00:52:39.001895 2585 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:52:39.002230 kubelet[2585]: I0430 00:52:39.002104 2585 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:52:39.002230 kubelet[2585]: I0430 00:52:39.002133 2585 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:52:39.002230 kubelet[2585]: E0430 00:52:39.002184 2585 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:52:39.003652 kubelet[2585]: W0430 00:52:39.003608 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.45.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:39.005449 kubelet[2585]: E0430 00:52:39.005133 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.12.45.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:39.009123 kubelet[2585]: I0430 00:52:39.009040 2585 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:52:39.009123 kubelet[2585]: I0430 00:52:39.009066 2585 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:52:39.009905 kubelet[2585]: I0430 00:52:39.009586 2585 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:52:39.013605 kubelet[2585]: I0430 00:52:39.013568 2585 policy_none.go:49] "None policy: Start" Apr 30 00:52:39.014890 kubelet[2585]: I0430 00:52:39.014861 2585 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:52:39.015549 kubelet[2585]: I0430 00:52:39.015129 2585 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:52:39.022985 kubelet[2585]: I0430 00:52:39.022344 2585 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:52:39.022985 kubelet[2585]: I0430 00:52:39.022640 2585 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:52:39.022985 kubelet[2585]: I0430 00:52:39.022766 2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:52:39.026062 kubelet[2585]: E0430 00:52:39.026031 2585 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-6-32a99953eb\" not found" Apr 30 00:52:39.065175 kubelet[2585]: I0430 00:52:39.065129 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.065697 kubelet[2585]: E0430 00:52:39.065552 2585 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.12.45.4:6443/api/v1/nodes\": dial tcp 49.12.45.4:6443: connect: connection refused" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.103095 kubelet[2585]: I0430 00:52:39.102912 2585 topology_manager.go:215] "Topology Admit Handler" podUID="523ddb53e4e89ae6e8842f74387fd547" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.107300 kubelet[2585]: I0430 00:52:39.107105 2585 topology_manager.go:215] "Topology Admit Handler" podUID="80b4003c4d3ec64b43bcb1c0005bb4a0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.112358 kubelet[2585]: I0430 00:52:39.111800 2585 topology_manager.go:215] 
"Topology Admit Handler" podUID="841ffa88f55286aabc5aea08e000001c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167036 kubelet[2585]: I0430 00:52:39.166141 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167036 kubelet[2585]: I0430 00:52:39.166207 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167036 kubelet[2585]: I0430 00:52:39.166239 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167036 kubelet[2585]: I0430 00:52:39.166267 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/523ddb53e4e89ae6e8842f74387fd547-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-6-32a99953eb\" (UID: \"523ddb53e4e89ae6e8842f74387fd547\") " pod="kube-system/kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167036 kubelet[2585]: I0430 00:52:39.166293 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/523ddb53e4e89ae6e8842f74387fd547-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-6-32a99953eb\" (UID: \"523ddb53e4e89ae6e8842f74387fd547\") " pod="kube-system/kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167339 kubelet[2585]: I0430 00:52:39.166318 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/523ddb53e4e89ae6e8842f74387fd547-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-6-32a99953eb\" (UID: \"523ddb53e4e89ae6e8842f74387fd547\") " pod="kube-system/kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167339 kubelet[2585]: I0430 00:52:39.166341 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167339 kubelet[2585]: I0430 00:52:39.166364 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.167339 kubelet[2585]: I0430 00:52:39.166387 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/841ffa88f55286aabc5aea08e000001c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-6-32a99953eb\" (UID: \"841ffa88f55286aabc5aea08e000001c\") " pod="kube-system/kube-scheduler-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.168152 kubelet[2585]: E0430 00:52:39.168108 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.45.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-6-32a99953eb?timeout=10s\": dial tcp 49.12.45.4:6443: connect: connection refused" interval="400ms" Apr 30 00:52:39.269745 kubelet[2585]: I0430 00:52:39.269251 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.270087 kubelet[2585]: E0430 00:52:39.270047 2585 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.12.45.4:6443/api/v1/nodes\": dial tcp 49.12.45.4:6443: connect: connection refused" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.418088 containerd[1612]: time="2025-04-30T00:52:39.417590473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-6-32a99953eb,Uid:80b4003c4d3ec64b43bcb1c0005bb4a0,Namespace:kube-system,Attempt:0,}" Apr 30 00:52:39.419073 containerd[1612]: time="2025-04-30T00:52:39.418626291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-6-32a99953eb,Uid:523ddb53e4e89ae6e8842f74387fd547,Namespace:kube-system,Attempt:0,}" Apr 30 00:52:39.422767 containerd[1612]: time="2025-04-30T00:52:39.422597312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-6-32a99953eb,Uid:841ffa88f55286aabc5aea08e000001c,Namespace:kube-system,Attempt:0,}" Apr 30 00:52:39.569427 kubelet[2585]: E0430 00:52:39.569381 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.45.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-6-32a99953eb?timeout=10s\": dial tcp 49.12.45.4:6443: connect: connection refused" interval="800ms" Apr 30 00:52:39.673530 kubelet[2585]: I0430 00:52:39.673054 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.673530 kubelet[2585]: E0430 00:52:39.673395 2585 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.12.45.4:6443/api/v1/nodes\": dial tcp 49.12.45.4:6443: connect: connection refused" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:39.786034 kubelet[2585]: W0430 00:52:39.785878 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.12.45.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-6-32a99953eb&limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:39.786034 kubelet[2585]: E0430 00:52:39.785986 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.12.45.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-6-32a99953eb&limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:39.801719 kubelet[2585]: W0430 00:52:39.801596 2585 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.12.45.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:39.801719 kubelet[2585]: E0430 00:52:39.801689 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.12.45.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:39.932320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830554593.mount: Deactivated successfully. Apr 30 00:52:39.941647 containerd[1612]: time="2025-04-30T00:52:39.941586268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:52:39.942786 containerd[1612]: time="2025-04-30T00:52:39.942731532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 00:52:39.943543 containerd[1612]: time="2025-04-30T00:52:39.943507855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:52:39.944374 containerd[1612]: time="2025-04-30T00:52:39.944338861Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:52:39.945324 containerd[1612]: time="2025-04-30T00:52:39.945280313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:52:39.945868 containerd[1612]: time="2025-04-30T00:52:39.945777621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:52:39.946972 containerd[1612]: time="2025-04-30T00:52:39.946681511Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:52:39.952335 containerd[1612]: time="2025-04-30T00:52:39.951864359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:52:39.953271 containerd[1612]: time="2025-04-30T00:52:39.953236075Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.555437ms" Apr 30 00:52:39.956198 containerd[1612]: time="2025-04-30T00:52:39.956147197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.470281ms" Apr 30 00:52:39.965064 containerd[1612]: time="2025-04-30T00:52:39.964900603Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.202469ms" Apr 30 00:52:40.024928 kubelet[2585]: W0430 00:52:40.024839 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.45.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:40.024928 kubelet[2585]: E0430 00:52:40.024907 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.12.45.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:40.098696 containerd[1612]: time="2025-04-30T00:52:40.098597821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:52:40.098945 containerd[1612]: time="2025-04-30T00:52:40.098706387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:52:40.098945 containerd[1612]: time="2025-04-30T00:52:40.098752549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:52:40.098945 containerd[1612]: time="2025-04-30T00:52:40.098874716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.099521631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.099578994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.099589875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.099698761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.098262563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.099541672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.099556353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:52:40.101070 containerd[1612]: time="2025-04-30T00:52:40.099638397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:52:40.184123 containerd[1612]: time="2025-04-30T00:52:40.183267732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-6-32a99953eb,Uid:523ddb53e4e89ae6e8842f74387fd547,Namespace:kube-system,Attempt:0,} returns sandbox id \"097a9088971d06593e85db1bcfbbcd55f6f38a7863e7ff796f41bb6051f2e892\"" Apr 30 00:52:40.192652 containerd[1612]: time="2025-04-30T00:52:40.192317102Z" level=info msg="CreateContainer within sandbox \"097a9088971d06593e85db1bcfbbcd55f6f38a7863e7ff796f41bb6051f2e892\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:52:40.195580 containerd[1612]: time="2025-04-30T00:52:40.195479514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-6-32a99953eb,Uid:841ffa88f55286aabc5aea08e000001c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a30b6b02f95c5738b248a787312aa3ea6f052435a6e8ff85777f8e1fee806c9\"" Apr 30 00:52:40.201507 containerd[1612]: time="2025-04-30T00:52:40.201285989Z" level=info msg="CreateContainer within sandbox \"7a30b6b02f95c5738b248a787312aa3ea6f052435a6e8ff85777f8e1fee806c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:52:40.205821 containerd[1612]: time="2025-04-30T00:52:40.205647105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-6-32a99953eb,Uid:80b4003c4d3ec64b43bcb1c0005bb4a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9314b1fdfe46a6e7f8c18655d557a092c5f2a374c4ed2b28add56713e2ab14ef\"" Apr 30 00:52:40.211553 containerd[1612]: time="2025-04-30T00:52:40.211515783Z" level=info msg="CreateContainer within sandbox \"9314b1fdfe46a6e7f8c18655d557a092c5f2a374c4ed2b28add56713e2ab14ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:52:40.217467 containerd[1612]: time="2025-04-30T00:52:40.217275896Z" level=info msg="CreateContainer within sandbox \"097a9088971d06593e85db1bcfbbcd55f6f38a7863e7ff796f41bb6051f2e892\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3494e0f788b47111455d60dc6ee18b8bfd1f55bd69e28539bd3ec549ce6e5f8\"" Apr 30 00:52:40.218875 containerd[1612]: time="2025-04-30T00:52:40.218828420Z" level=info msg="StartContainer for \"d3494e0f788b47111455d60dc6ee18b8bfd1f55bd69e28539bd3ec549ce6e5f8\"" Apr 30 00:52:40.232866 containerd[1612]: time="2025-04-30T00:52:40.232816378Z" level=info msg="CreateContainer within sandbox \"7a30b6b02f95c5738b248a787312aa3ea6f052435a6e8ff85777f8e1fee806c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa415790e5e072ccdf5cfa4fa4a089266199db8c20929a79a01246bc539bd7b0\"" Apr 30 00:52:40.233834 containerd[1612]: time="2025-04-30T00:52:40.233793871Z" level=info msg="StartContainer for \"fa415790e5e072ccdf5cfa4fa4a089266199db8c20929a79a01246bc539bd7b0\"" Apr 30 00:52:40.245171 containerd[1612]: time="2025-04-30T00:52:40.245023600Z" level=info msg="CreateContainer within sandbox \"9314b1fdfe46a6e7f8c18655d557a092c5f2a374c4ed2b28add56713e2ab14ef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df7f87ab076e49e1bca6a076e13ebbb089a78e0a329897ab30c02d9127d55612\"" Apr 30 00:52:40.245713 containerd[1612]: time="2025-04-30T00:52:40.245560789Z" level=info msg="StartContainer for \"df7f87ab076e49e1bca6a076e13ebbb089a78e0a329897ab30c02d9127d55612\"" Apr 30 00:52:40.257208 kubelet[2585]: W0430 00:52:40.257109 2585 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.45.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:40.257208 kubelet[2585]: E0430 00:52:40.257185 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.12.45.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.45.4:6443: connect: connection refused Apr 30 00:52:40.328154 containerd[1612]: time="2025-04-30T00:52:40.325235749Z" level=info msg="StartContainer for \"d3494e0f788b47111455d60dc6ee18b8bfd1f55bd69e28539bd3ec549ce6e5f8\" returns successfully" Apr 30 00:52:40.368696 containerd[1612]: time="2025-04-30T00:52:40.368637783Z" level=info msg="StartContainer for \"df7f87ab076e49e1bca6a076e13ebbb089a78e0a329897ab30c02d9127d55612\" returns successfully" Apr 30 00:52:40.370158 containerd[1612]: time="2025-04-30T00:52:40.368732388Z" level=info msg="StartContainer for \"fa415790e5e072ccdf5cfa4fa4a089266199db8c20929a79a01246bc539bd7b0\" returns successfully" Apr 30 00:52:40.370254 kubelet[2585]: E0430 00:52:40.370200 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.45.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-6-32a99953eb?timeout=10s\": dial tcp 49.12.45.4:6443: connect: connection refused" interval="1.6s" Apr 30 00:52:40.476614 kubelet[2585]: I0430 00:52:40.476510 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:42.408134 kubelet[2585]: E0430 00:52:42.408079 2585 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-6-32a99953eb\" not found" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:42.463950 kubelet[2585]: E0430 00:52:42.461920 2585 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-6-32a99953eb.183af26dcbcf33e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-6-32a99953eb,UID:ci-4081-3-3-6-32a99953eb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-6-32a99953eb,},FirstTimestamp:2025-04-30 00:52:38.944199652 +0000 UTC m=+1.264060163,LastTimestamp:2025-04-30 00:52:38.944199652 +0000 UTC m=+1.264060163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-6-32a99953eb,}" Apr 30 00:52:42.494153 kubelet[2585]: I0430 00:52:42.494082 2585 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:42.554594 kubelet[2585]: E0430 00:52:42.554462 2585 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-6-32a99953eb.183af26dcd446e95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-6-32a99953eb,UID:ci-4081-3-3-6-32a99953eb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-6-32a99953eb,},FirstTimestamp:2025-04-30 00:52:38.968659605 +0000 UTC 
m=+1.288520156,LastTimestamp:2025-04-30 00:52:38.968659605 +0000 UTC m=+1.288520156,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-6-32a99953eb,}" Apr 30 00:52:42.563078 kubelet[2585]: E0430 00:52:42.562971 2585 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-6-32a99953eb\" not found" Apr 30 00:52:42.944203 kubelet[2585]: I0430 00:52:42.943920 2585 apiserver.go:52] "Watching apiserver" Apr 30 00:52:42.964514 kubelet[2585]: I0430 00:52:42.964476 2585 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:52:44.620621 systemd[1]: Reloading requested from client PID 2859 ('systemctl') (unit session-7.scope)... Apr 30 00:52:44.620643 systemd[1]: Reloading... Apr 30 00:52:44.737982 zram_generator::config[2902]: No configuration found. Apr 30 00:52:44.861773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:52:44.943955 systemd[1]: Reloading finished in 322 ms. Apr 30 00:52:44.979743 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:52:44.992515 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:52:44.993023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:52:45.005193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:52:45.128505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:52:45.145722 (kubelet)[2954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:52:45.231477 kubelet[2954]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:52:45.231861 kubelet[2954]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:52:45.231999 kubelet[2954]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:52:45.232199 kubelet[2954]: I0430 00:52:45.232163 2954 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:52:45.239419 kubelet[2954]: I0430 00:52:45.239379 2954 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:52:45.239576 kubelet[2954]: I0430 00:52:45.239559 2954 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:52:45.240366 kubelet[2954]: I0430 00:52:45.239968 2954 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:52:45.241859 kubelet[2954]: I0430 00:52:45.241834 2954 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
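
The "connection refused" reflector errors earlier in this boot (kubelet 2585 at 00:52:40) are the kubelet's informers listing resources from an apiserver whose own static pod this same kubelet was still starting; they are expected during control-plane bootstrap and stop once kube-apiserver binds 49.12.45.4:6443. A minimal Go sketch of the same request follows — it assumes client-go and a node kubeconfig at /etc/kubernetes/kubelet.conf (the path is an assumption, not taken from this log):

```go
// Sketch only: reproduces, with client-go, the LIST the kubelet's reflector
// was retrying (GET /apis/storage.k8s.io/v1/csidrivers?limit=500 in the log).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; the log does not show it.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// While the kube-apiserver static pod is still coming up, this returns the
	// same "dial tcp ...:6443: connect: connection refused" seen above; the
	// reflector simply backs off and retries until the apiserver answers.
	list, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list csidrivers:", err)
		return
	}
	fmt.Println("csidrivers:", len(list.Items))
}
```
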
Apr 30 00:52:45.243490 kubelet[2954]: I0430 00:52:45.243412 2954 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:52:45.251896 kubelet[2954]: I0430 00:52:45.251688 2954 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:52:45.252321 kubelet[2954]: I0430 00:52:45.252211 2954 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:52:45.252488 kubelet[2954]: I0430 00:52:45.252300 2954 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-6-32a99953eb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:52:45.252575 kubelet[2954]: I0430 00:52:45.252498 2954 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:52:45.252575 kubelet[2954]: I0430 00:52:45.252509 2954 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:52:45.252575 kubelet[2954]: I0430 00:52:45.252550 2954 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:52:45.252721 kubelet[2954]: I0430 00:52:45.252706 2954 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:52:45.252764 kubelet[2954]: I0430 00:52:45.252728 2954 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:52:45.253340 kubelet[2954]: I0430 00:52:45.253316 2954 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:52:45.253419 kubelet[2954]: I0430 00:52:45.253351 2954 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:52:45.254832 kubelet[2954]: I0430 00:52:45.254801 2954 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:52:45.255211 kubelet[2954]: I0430 00:52:45.255024 2954 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:52:45.255938 kubelet[2954]: I0430 00:52:45.255506 2954 server.go:1264] "Started kubelet" Apr 30 00:52:45.259764 
kubelet[2954]: I0430 00:52:45.259737 2954 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:52:45.277971 kubelet[2954]: I0430 00:52:45.276492 2954 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:52:45.277971 kubelet[2954]: I0430 00:52:45.277624 2954 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:52:45.278598 kubelet[2954]: I0430 00:52:45.278535 2954 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:52:45.278777 kubelet[2954]: I0430 00:52:45.278753 2954 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:52:45.283954 kubelet[2954]: I0430 00:52:45.283722 2954 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:52:45.288782 kubelet[2954]: I0430 00:52:45.288753 2954 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:52:45.288961 kubelet[2954]: I0430 00:52:45.288948 2954 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:52:45.293035 kubelet[2954]: I0430 00:52:45.292368 2954 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:52:45.294837 kubelet[2954]: I0430 00:52:45.294797 2954 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:52:45.294837 kubelet[2954]: I0430 00:52:45.294846 2954 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:52:45.295044 kubelet[2954]: I0430 00:52:45.294868 2954 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:52:45.295044 kubelet[2954]: E0430 00:52:45.294912 2954 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:52:45.299111 kubelet[2954]: I0430 00:52:45.299039 2954 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:52:45.299326 kubelet[2954]: I0430 00:52:45.299159 2954 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:52:45.309452 kubelet[2954]: I0430 00:52:45.309422 2954 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:52:45.329318 kubelet[2954]: E0430 00:52:45.329273 2954 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:52:45.378557 kubelet[2954]: I0430 00:52:45.378291 2954 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:52:45.378557 kubelet[2954]: I0430 00:52:45.378313 2954 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:52:45.378557 kubelet[2954]: I0430 00:52:45.378335 2954 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:52:45.379506 kubelet[2954]: I0430 00:52:45.378639 2954 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:52:45.379506 kubelet[2954]: I0430 00:52:45.378654 2954 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:52:45.379506 kubelet[2954]: I0430 00:52:45.378674 2954 policy_none.go:49] "None policy: Start" Apr 30 00:52:45.380911 kubelet[2954]: I0430 00:52:45.380812 2954 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:52:45.380911 kubelet[2954]: I0430 00:52:45.380848 2954 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:52:45.380911 kubelet[2954]: I0430 00:52:45.381076 2954 state_mem.go:75] "Updated machine memory state" Apr 30 00:52:45.382975 kubelet[2954]: I0430 00:52:45.382447 2954 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:52:45.382975 kubelet[2954]: I0430 00:52:45.382637 2954 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:52:45.382975 kubelet[2954]: I0430 00:52:45.382747 2954 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:52:45.388686 kubelet[2954]: I0430 00:52:45.388655 2954 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.395789 kubelet[2954]: I0430 00:52:45.395223 2954 topology_manager.go:215] "Topology Admit Handler" podUID="523ddb53e4e89ae6e8842f74387fd547" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.395789 kubelet[2954]: I0430 00:52:45.395387 2954 topology_manager.go:215] "Topology Admit Handler" podUID="80b4003c4d3ec64b43bcb1c0005bb4a0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.395789 kubelet[2954]: I0430 00:52:45.395427 2954 topology_manager.go:215] "Topology Admit Handler" podUID="841ffa88f55286aabc5aea08e000001c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.409054 kubelet[2954]: I0430 00:52:45.408888 2954 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.409054 kubelet[2954]: I0430 00:52:45.409000 2954 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.490208 kubelet[2954]: I0430 00:52:45.490095 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/523ddb53e4e89ae6e8842f74387fd547-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-6-32a99953eb\" (UID: \"523ddb53e4e89ae6e8842f74387fd547\") " pod="kube-system/kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.490639 kubelet[2954]: I0430 00:52:45.490408 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" 
(UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.490639 kubelet[2954]: I0430 00:52:45.490458 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.490639 kubelet[2954]: I0430 00:52:45.490486 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/841ffa88f55286aabc5aea08e000001c-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-6-32a99953eb\" (UID: \"841ffa88f55286aabc5aea08e000001c\") " pod="kube-system/kube-scheduler-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.490639 kubelet[2954]: I0430 00:52:45.490503 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/523ddb53e4e89ae6e8842f74387fd547-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-6-32a99953eb\" (UID: \"523ddb53e4e89ae6e8842f74387fd547\") " pod="kube-system/kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.490639 kubelet[2954]: I0430 00:52:45.490530 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/523ddb53e4e89ae6e8842f74387fd547-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-6-32a99953eb\" (UID: \"523ddb53e4e89ae6e8842f74387fd547\") " pod="kube-system/kube-apiserver-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.491017 kubelet[2954]: I0430 00:52:45.490547 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.491017 kubelet[2954]: I0430 00:52:45.490563 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:45.491017 kubelet[2954]: I0430 00:52:45.490579 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80b4003c4d3ec64b43bcb1c0005bb4a0-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-6-32a99953eb\" (UID: \"80b4003c4d3ec64b43bcb1c0005bb4a0\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" Apr 30 00:52:46.265020 kubelet[2954]: I0430 00:52:46.264915 2954 apiserver.go:52] "Watching apiserver" Apr 30 00:52:46.289874 kubelet[2954]: I0430 00:52:46.289810 2954 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:52:46.410968 kubelet[2954]: I0430 00:52:46.409202 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4081-3-3-6-32a99953eb" podStartSLOduration=1.409179749 podStartE2EDuration="1.409179749s" podCreationTimestamp="2025-04-30 00:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:52:46.384915968 +0000 UTC m=+1.229542299" watchObservedRunningTime="2025-04-30 00:52:46.409179749 +0000 UTC m=+1.253806080" Apr 30 00:52:46.412746 kubelet[2954]: I0430 00:52:46.411776 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-6-32a99953eb" podStartSLOduration=1.41175827 podStartE2EDuration="1.41175827s" podCreationTimestamp="2025-04-30 00:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:52:46.411592662 +0000 UTC m=+1.256218993" watchObservedRunningTime="2025-04-30 00:52:46.41175827 +0000 UTC m=+1.256384641" Apr 30 00:52:46.439310 kubelet[2954]: I0430 00:52:46.438552 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-6-32a99953eb" podStartSLOduration=1.438536649 podStartE2EDuration="1.438536649s" podCreationTimestamp="2025-04-30 00:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:52:46.438263396 +0000 UTC m=+1.282889727" watchObservedRunningTime="2025-04-30 00:52:46.438536649 +0000 UTC m=+1.283162940" Apr 30 00:52:50.951469 sudo[2046]: pam_unix(sudo:session): session closed for user root Apr 30 00:52:51.113369 sshd[2022]: pam_unix(sshd:session): session closed for user core Apr 30 00:52:51.121672 systemd[1]: sshd@6-49.12.45.4:22-139.178.68.195:58654.service: Deactivated successfully. Apr 30 00:52:51.128350 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:52:51.132181 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:52:51.133722 systemd-logind[1580]: Removed session 7. Apr 30 00:52:59.811560 kubelet[2954]: I0430 00:52:59.811526 2954 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:52:59.812921 containerd[1612]: time="2025-04-30T00:52:59.812632656Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 00:52:59.814176 kubelet[2954]: I0430 00:52:59.813916 2954 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:52:59.876765 kubelet[2954]: I0430 00:52:59.872991 2954 topology_manager.go:215] "Topology Admit Handler" podUID="9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de" podNamespace="kube-system" podName="kube-proxy-xnznk" Apr 30 00:52:59.891560 kubelet[2954]: W0430 00:52:59.890533 2954 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-3-6-32a99953eb" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-6-32a99953eb' and this object Apr 30 00:52:59.891560 kubelet[2954]: E0430 00:52:59.890587 2954 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-3-6-32a99953eb" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-6-32a99953eb' and this object Apr 30 00:52:59.891560 kubelet[2954]: I0430 00:52:59.890405 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-kube-proxy\") pod \"kube-proxy-xnznk\" (UID: \"9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de\") " pod="kube-system/kube-proxy-xnznk" Apr 30 00:52:59.891560 kubelet[2954]: I0430 00:52:59.890852 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-xtables-lock\") pod \"kube-proxy-xnznk\" (UID: \"9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de\") " pod="kube-system/kube-proxy-xnznk" Apr 30 00:52:59.891560 kubelet[2954]: I0430 00:52:59.891217 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbfz\" (UniqueName: \"kubernetes.io/projected/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-kube-api-access-skbfz\") pod \"kube-proxy-xnznk\" (UID: \"9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de\") " pod="kube-system/kube-proxy-xnznk" Apr 30 00:52:59.891815 kubelet[2954]: I0430 00:52:59.891253 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-lib-modules\") pod \"kube-proxy-xnznk\" (UID: \"9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de\") " pod="kube-system/kube-proxy-xnznk" Apr 30 00:52:59.894961 kubelet[2954]: W0430 00:52:59.893759 2954 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-3-6-32a99953eb" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-6-32a99953eb' and this object Apr 30 00:52:59.894961 kubelet[2954]: E0430 00:52:59.893809 2954 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-3-6-32a99953eb" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-6-32a99953eb' and this object Apr 30 
00:53:00.406755 kubelet[2954]: I0430 00:53:00.406698 2954 topology_manager.go:215] "Topology Admit Handler" podUID="fce9f0d6-a616-4b2b-8ff1-0b890f47e602" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-j9j9n" Apr 30 00:53:00.498011 kubelet[2954]: I0430 00:53:00.497867 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fce9f0d6-a616-4b2b-8ff1-0b890f47e602-var-lib-calico\") pod \"tigera-operator-797db67f8-j9j9n\" (UID: \"fce9f0d6-a616-4b2b-8ff1-0b890f47e602\") " pod="tigera-operator/tigera-operator-797db67f8-j9j9n" Apr 30 00:53:00.498345 kubelet[2954]: I0430 00:53:00.498048 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64l55\" (UniqueName: \"kubernetes.io/projected/fce9f0d6-a616-4b2b-8ff1-0b890f47e602-kube-api-access-64l55\") pod \"tigera-operator-797db67f8-j9j9n\" (UID: \"fce9f0d6-a616-4b2b-8ff1-0b890f47e602\") " pod="tigera-operator/tigera-operator-797db67f8-j9j9n" Apr 30 00:53:00.719640 containerd[1612]: time="2025-04-30T00:53:00.719425676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-j9j9n,Uid:fce9f0d6-a616-4b2b-8ff1-0b890f47e602,Namespace:tigera-operator,Attempt:0,}" Apr 30 00:53:00.751768 containerd[1612]: time="2025-04-30T00:53:00.751394705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:00.751768 containerd[1612]: time="2025-04-30T00:53:00.751451707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:00.751768 containerd[1612]: time="2025-04-30T00:53:00.751463947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:00.751768 containerd[1612]: time="2025-04-30T00:53:00.751549070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:00.806309 containerd[1612]: time="2025-04-30T00:53:00.805956717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-j9j9n,Uid:fce9f0d6-a616-4b2b-8ff1-0b890f47e602,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2d5d0aa98ef8114bee8fd5ab65dcb795fbe3ea41c39b870a7fdaa8ac0ef74efd\"" Apr 30 00:53:00.809967 containerd[1612]: time="2025-04-30T00:53:00.809564442Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 00:53:00.995110 kubelet[2954]: E0430 00:53:00.993853 2954 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 30 00:53:00.995110 kubelet[2954]: E0430 00:53:00.993996 2954 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-kube-proxy podName:9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de nodeName:}" failed. No retries permitted until 2025-04-30 00:53:01.493968517 +0000 UTC m=+16.338594848 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-kube-proxy") pod "kube-proxy-xnznk" (UID: "9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de") : failed to sync configmap cache: timed out waiting for the condition Apr 30 00:53:01.003859 kubelet[2954]: E0430 00:53:01.003437 2954 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 00:53:01.003859 kubelet[2954]: E0430 00:53:01.003491 2954 projected.go:200] Error preparing data for projected volume kube-api-access-skbfz for pod kube-system/kube-proxy-xnznk: failed to sync configmap cache: timed out waiting for the condition Apr 30 00:53:01.003859 kubelet[2954]: E0430 00:53:01.003586 2954 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-kube-api-access-skbfz podName:9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de nodeName:}" failed. No retries permitted until 2025-04-30 00:53:01.503562127 +0000 UTC m=+16.348188458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-skbfz" (UniqueName: "kubernetes.io/projected/9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de-kube-api-access-skbfz") pod "kube-proxy-xnznk" (UID: "9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de") : failed to sync configmap cache: timed out waiting for the condition Apr 30 00:53:01.701205 containerd[1612]: time="2025-04-30T00:53:01.699669748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnznk,Uid:9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:01.741110 containerd[1612]: time="2025-04-30T00:53:01.740908869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:01.741110 containerd[1612]: time="2025-04-30T00:53:01.740999873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:01.741110 containerd[1612]: time="2025-04-30T00:53:01.741015753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:01.741110 containerd[1612]: time="2025-04-30T00:53:01.741113556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:01.762077 systemd[1]: run-containerd-runc-k8s.io-244200b8102d3e73924080ec06f5617e29de654501a1a5b4387bc1c72d2b49f1-runc.aTH057.mount: Deactivated successfully. 
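
The MountVolume.SetUp failures above are a bootstrap race rather than misconfiguration: the node authorizer only lets a kubelet read configmaps referenced by pods already bound to that node, hence "no relationship found between node 'ci-4081-3-3-6-32a99953eb' and this object" at 00:52:59, the cache-sync timeout at 00:53:00.99, and the successful retry implied by the kube-proxy sandbox creation at 00:53:01.69. A small client-go probe of the same GET, assuming the same kubeconfig path as the first sketch; the "config.conf" data key is kubeadm's convention and an assumption here:

```go
// Sketch only: fetches the configmap the kubelet was waiting on. Run with the
// node's credentials it is Forbidden until a pod mounting it is bound to this
// node; with admin credentials it succeeds immediately.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get kube-proxy configmap:", err) // Forbidden during the race window
		return
	}
	fmt.Println("config.conf bytes:", len(cm.Data["config.conf"])) // key name is an assumption
}
```
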
Apr 30 00:53:01.790260 containerd[1612]: time="2025-04-30T00:53:01.790211905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnznk,Uid:9dd1f1c1-4e4d-4837-a3d3-6f001ec6f2de,Namespace:kube-system,Attempt:0,} returns sandbox id \"244200b8102d3e73924080ec06f5617e29de654501a1a5b4387bc1c72d2b49f1\"" Apr 30 00:53:01.797059 containerd[1612]: time="2025-04-30T00:53:01.797005336Z" level=info msg="CreateContainer within sandbox \"244200b8102d3e73924080ec06f5617e29de654501a1a5b4387bc1c72d2b49f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:53:01.818660 containerd[1612]: time="2025-04-30T00:53:01.818596950Z" level=info msg="CreateContainer within sandbox \"244200b8102d3e73924080ec06f5617e29de654501a1a5b4387bc1c72d2b49f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eed2f9112b4790f582a134ce421e64cfdd05f4340cda24d240e4c18a819ef199\"" Apr 30 00:53:01.819644 containerd[1612]: time="2025-04-30T00:53:01.819583264Z" level=info msg="StartContainer for \"eed2f9112b4790f582a134ce421e64cfdd05f4340cda24d240e4c18a819ef199\"" Apr 30 00:53:01.892582 containerd[1612]: time="2025-04-30T00:53:01.892523183Z" level=info msg="StartContainer for \"eed2f9112b4790f582a134ce421e64cfdd05f4340cda24d240e4c18a819ef199\" returns successfully" Apr 30 00:53:02.428091 kubelet[2954]: I0430 00:53:02.427990 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xnznk" podStartSLOduration=3.427952177 podStartE2EDuration="3.427952177s" podCreationTimestamp="2025-04-30 00:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:02.42353331 +0000 UTC m=+17.268159641" watchObservedRunningTime="2025-04-30 00:53:02.427952177 +0000 UTC m=+17.272578508" Apr 30 00:53:03.305916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318054597.mount: Deactivated successfully. 
Apr 30 00:53:03.621234 containerd[1612]: time="2025-04-30T00:53:03.621089814Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:03.624631 containerd[1612]: time="2025-04-30T00:53:03.624577248Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 00:53:03.626319 containerd[1612]: time="2025-04-30T00:53:03.626215822Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:03.632001 containerd[1612]: time="2025-04-30T00:53:03.630522123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:03.632509 containerd[1612]: time="2025-04-30T00:53:03.632437825Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.822813262s" Apr 30 00:53:03.632644 containerd[1612]: time="2025-04-30T00:53:03.632619791Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 00:53:03.637345 containerd[1612]: time="2025-04-30T00:53:03.637301624Z" level=info msg="CreateContainer within sandbox \"2d5d0aa98ef8114bee8fd5ab65dcb795fbe3ea41c39b870a7fdaa8ac0ef74efd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 00:53:03.653904 containerd[1612]: time="2025-04-30T00:53:03.653835444Z" level=info msg="CreateContainer within sandbox \"2d5d0aa98ef8114bee8fd5ab65dcb795fbe3ea41c39b870a7fdaa8ac0ef74efd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5be8f62062b67200eb6196c8a753478276337fdc3d12b7b032870ab5f955a1a4\"" Apr 30 00:53:03.659344 containerd[1612]: time="2025-04-30T00:53:03.659259702Z" level=info msg="StartContainer for \"5be8f62062b67200eb6196c8a753478276337fdc3d12b7b032870ab5f955a1a4\"" Apr 30 00:53:03.723273 containerd[1612]: time="2025-04-30T00:53:03.723219552Z" level=info msg="StartContainer for \"5be8f62062b67200eb6196c8a753478276337fdc3d12b7b032870ab5f955a1a4\" returns successfully" Apr 30 00:53:08.021232 kubelet[2954]: I0430 00:53:08.021153 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-j9j9n" podStartSLOduration=5.195610592 podStartE2EDuration="8.021133545s" podCreationTimestamp="2025-04-30 00:53:00 +0000 UTC" firstStartedPulling="2025-04-30 00:53:00.808137912 +0000 UTC m=+15.652764243" lastFinishedPulling="2025-04-30 00:53:03.633660865 +0000 UTC m=+18.478287196" observedRunningTime="2025-04-30 00:53:04.4250717 +0000 UTC m=+19.269698031" watchObservedRunningTime="2025-04-30 00:53:08.021133545 +0000 UTC m=+22.865759876" Apr 30 00:53:08.021728 kubelet[2954]: I0430 00:53:08.021538 2954 topology_manager.go:215] "Topology Admit Handler" podUID="850b6ee0-cd1a-4ef9-8789-7799cb206094" podNamespace="calico-system" podName="calico-typha-7748cf48db-x8495" Apr 30 00:53:08.054632 kubelet[2954]: I0430 00:53:08.054573 2954 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4gtr\" (UniqueName: \"kubernetes.io/projected/850b6ee0-cd1a-4ef9-8789-7799cb206094-kube-api-access-c4gtr\") pod \"calico-typha-7748cf48db-x8495\" (UID: \"850b6ee0-cd1a-4ef9-8789-7799cb206094\") " pod="calico-system/calico-typha-7748cf48db-x8495" Apr 30 00:53:08.054632 kubelet[2954]: I0430 00:53:08.054630 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/850b6ee0-cd1a-4ef9-8789-7799cb206094-tigera-ca-bundle\") pod \"calico-typha-7748cf48db-x8495\" (UID: \"850b6ee0-cd1a-4ef9-8789-7799cb206094\") " pod="calico-system/calico-typha-7748cf48db-x8495" Apr 30 00:53:08.054811 kubelet[2954]: I0430 00:53:08.054653 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/850b6ee0-cd1a-4ef9-8789-7799cb206094-typha-certs\") pod \"calico-typha-7748cf48db-x8495\" (UID: \"850b6ee0-cd1a-4ef9-8789-7799cb206094\") " pod="calico-system/calico-typha-7748cf48db-x8495" Apr 30 00:53:08.236808 kubelet[2954]: I0430 00:53:08.236747 2954 topology_manager.go:215] "Topology Admit Handler" podUID="bd7869e7-5729-4e4f-8fe7-2daf97220a78" podNamespace="calico-system" podName="calico-node-j7b4g" Apr 30 00:53:08.256066 kubelet[2954]: I0430 00:53:08.256016 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-policysync\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256066 kubelet[2954]: I0430 00:53:08.256064 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-cni-bin-dir\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256258 kubelet[2954]: I0430 00:53:08.256085 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-cni-net-dir\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256258 kubelet[2954]: I0430 00:53:08.256101 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-cni-log-dir\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256258 kubelet[2954]: I0430 00:53:08.256118 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-xtables-lock\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256258 kubelet[2954]: I0430 00:53:08.256132 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bd7869e7-5729-4e4f-8fe7-2daf97220a78-node-certs\") pod 
\"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256258 kubelet[2954]: I0430 00:53:08.256150 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd7869e7-5729-4e4f-8fe7-2daf97220a78-tigera-ca-bundle\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256378 kubelet[2954]: I0430 00:53:08.256169 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-var-run-calico\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256378 kubelet[2954]: I0430 00:53:08.256188 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-flexvol-driver-host\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256378 kubelet[2954]: I0430 00:53:08.256207 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-lib-modules\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256378 kubelet[2954]: I0430 00:53:08.256221 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd7869e7-5729-4e4f-8fe7-2daf97220a78-var-lib-calico\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.256378 kubelet[2954]: I0430 00:53:08.256235 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t96r\" (UniqueName: \"kubernetes.io/projected/bd7869e7-5729-4e4f-8fe7-2daf97220a78-kube-api-access-9t96r\") pod \"calico-node-j7b4g\" (UID: \"bd7869e7-5729-4e4f-8fe7-2daf97220a78\") " pod="calico-system/calico-node-j7b4g" Apr 30 00:53:08.334530 containerd[1612]: time="2025-04-30T00:53:08.334362899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7748cf48db-x8495,Uid:850b6ee0-cd1a-4ef9-8789-7799cb206094,Namespace:calico-system,Attempt:0,}" Apr 30 00:53:08.362572 kubelet[2954]: E0430 00:53:08.359561 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.362572 kubelet[2954]: W0430 00:53:08.359587 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.362572 kubelet[2954]: E0430 00:53:08.359611 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.362572 kubelet[2954]: E0430 00:53:08.360232 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.362572 kubelet[2954]: W0430 00:53:08.360248 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.362572 kubelet[2954]: E0430 00:53:08.360263 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.362572 kubelet[2954]: E0430 00:53:08.360429 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.362572 kubelet[2954]: W0430 00:53:08.360437 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.362572 kubelet[2954]: E0430 00:53:08.360446 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.362572 kubelet[2954]: E0430 00:53:08.360687 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.368890 kubelet[2954]: W0430 00:53:08.360697 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.368890 kubelet[2954]: E0430 00:53:08.360712 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.368890 kubelet[2954]: E0430 00:53:08.361577 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.368890 kubelet[2954]: W0430 00:53:08.363615 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.368890 kubelet[2954]: E0430 00:53:08.363650 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.375884 kubelet[2954]: E0430 00:53:08.375691 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.375884 kubelet[2954]: W0430 00:53:08.375716 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.375884 kubelet[2954]: E0430 00:53:08.375748 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.377729 kubelet[2954]: E0430 00:53:08.376884 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.377729 kubelet[2954]: W0430 00:53:08.376908 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.377729 kubelet[2954]: E0430 00:53:08.376945 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.377729 kubelet[2954]: E0430 00:53:08.377315 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.377729 kubelet[2954]: W0430 00:53:08.377330 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.377729 kubelet[2954]: E0430 00:53:08.377345 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.383749 kubelet[2954]: I0430 00:53:08.383581 2954 topology_manager.go:215] "Topology Admit Handler" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" podNamespace="calico-system" podName="csi-node-driver-4568b" Apr 30 00:53:08.387658 kubelet[2954]: E0430 00:53:08.384776 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.387658 kubelet[2954]: W0430 00:53:08.384802 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.387658 kubelet[2954]: E0430 00:53:08.384830 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.391056 kubelet[2954]: E0430 00:53:08.389577 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4568b" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" Apr 30 00:53:08.402698 kubelet[2954]: E0430 00:53:08.402669 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.403193 kubelet[2954]: W0430 00:53:08.402975 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.403339 kubelet[2954]: E0430 00:53:08.403280 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.404157 kubelet[2954]: E0430 00:53:08.403805 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.404230 kubelet[2954]: W0430 00:53:08.404160 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.404230 kubelet[2954]: E0430 00:53:08.404183 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.427308 containerd[1612]: time="2025-04-30T00:53:08.427123378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:08.427308 containerd[1612]: time="2025-04-30T00:53:08.427247141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:08.427308 containerd[1612]: time="2025-04-30T00:53:08.427265062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:08.427663 containerd[1612]: time="2025-04-30T00:53:08.427396306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:08.447725 kubelet[2954]: E0430 00:53:08.447602 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.448101 kubelet[2954]: W0430 00:53:08.447902 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.448101 kubelet[2954]: E0430 00:53:08.448042 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.448648 kubelet[2954]: E0430 00:53:08.448437 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.448648 kubelet[2954]: W0430 00:53:08.448583 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.448648 kubelet[2954]: E0430 00:53:08.448600 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.449524 kubelet[2954]: E0430 00:53:08.449491 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.449524 kubelet[2954]: W0430 00:53:08.449507 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.449981 kubelet[2954]: E0430 00:53:08.449642 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.450405 kubelet[2954]: E0430 00:53:08.450390 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.450878 kubelet[2954]: W0430 00:53:08.450535 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.450878 kubelet[2954]: E0430 00:53:08.450558 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.451729 kubelet[2954]: E0430 00:53:08.451699 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.451839 kubelet[2954]: W0430 00:53:08.451824 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.452291 kubelet[2954]: E0430 00:53:08.452111 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.453001 kubelet[2954]: E0430 00:53:08.452755 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.453222 kubelet[2954]: W0430 00:53:08.453203 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.453363 kubelet[2954]: E0430 00:53:08.453299 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.453814 kubelet[2954]: E0430 00:53:08.453790 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.454246 kubelet[2954]: W0430 00:53:08.453803 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.454246 kubelet[2954]: E0430 00:53:08.453993 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.454980 kubelet[2954]: E0430 00:53:08.454864 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.454980 kubelet[2954]: W0430 00:53:08.454888 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.454980 kubelet[2954]: E0430 00:53:08.454901 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.456468 kubelet[2954]: E0430 00:53:08.456449 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.458693 kubelet[2954]: W0430 00:53:08.458399 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.460666 kubelet[2954]: E0430 00:53:08.460059 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.463281 kubelet[2954]: E0430 00:53:08.463094 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.463281 kubelet[2954]: W0430 00:53:08.463132 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.463281 kubelet[2954]: E0430 00:53:08.463156 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.464513 kubelet[2954]: E0430 00:53:08.463742 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.464513 kubelet[2954]: W0430 00:53:08.463757 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.464513 kubelet[2954]: E0430 00:53:08.463771 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.466275 kubelet[2954]: E0430 00:53:08.465515 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.466275 kubelet[2954]: W0430 00:53:08.465534 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.466275 kubelet[2954]: E0430 00:53:08.465560 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.468171 kubelet[2954]: E0430 00:53:08.467725 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.468171 kubelet[2954]: W0430 00:53:08.467747 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.468171 kubelet[2954]: E0430 00:53:08.467769 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.468735 kubelet[2954]: E0430 00:53:08.468387 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.468735 kubelet[2954]: W0430 00:53:08.468401 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.468735 kubelet[2954]: E0430 00:53:08.468415 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.469165 kubelet[2954]: E0430 00:53:08.469075 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.469165 kubelet[2954]: W0430 00:53:08.469089 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.469165 kubelet[2954]: E0430 00:53:08.469114 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.469691 kubelet[2954]: E0430 00:53:08.469537 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.469691 kubelet[2954]: W0430 00:53:08.469550 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.469691 kubelet[2954]: E0430 00:53:08.469562 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.471458 kubelet[2954]: E0430 00:53:08.471205 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.472091 kubelet[2954]: W0430 00:53:08.471758 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.472091 kubelet[2954]: E0430 00:53:08.471787 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.474963 kubelet[2954]: E0430 00:53:08.473813 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.474963 kubelet[2954]: W0430 00:53:08.473833 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.474963 kubelet[2954]: E0430 00:53:08.473894 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.476803 kubelet[2954]: E0430 00:53:08.476767 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.477133 kubelet[2954]: W0430 00:53:08.476975 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.477133 kubelet[2954]: E0430 00:53:08.477005 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.478581 kubelet[2954]: E0430 00:53:08.477998 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.478581 kubelet[2954]: W0430 00:53:08.478019 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.478581 kubelet[2954]: E0430 00:53:08.478039 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.481366 kubelet[2954]: E0430 00:53:08.480808 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.481366 kubelet[2954]: W0430 00:53:08.480834 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.481366 kubelet[2954]: E0430 00:53:08.480898 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.481366 kubelet[2954]: I0430 00:53:08.480971 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/128e4be7-9f7d-4c2d-8a19-50ffaa3dc839-registration-dir\") pod \"csi-node-driver-4568b\" (UID: \"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839\") " pod="calico-system/csi-node-driver-4568b" Apr 30 00:53:08.481366 kubelet[2954]: E0430 00:53:08.481264 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.481366 kubelet[2954]: W0430 00:53:08.481278 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.481366 kubelet[2954]: E0430 00:53:08.481303 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.482144 kubelet[2954]: E0430 00:53:08.481815 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.482144 kubelet[2954]: W0430 00:53:08.481828 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.482144 kubelet[2954]: E0430 00:53:08.481843 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.483566 kubelet[2954]: E0430 00:53:08.483484 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.483566 kubelet[2954]: W0430 00:53:08.483512 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.483566 kubelet[2954]: E0430 00:53:08.483535 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.483566 kubelet[2954]: I0430 00:53:08.483573 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128e4be7-9f7d-4c2d-8a19-50ffaa3dc839-kubelet-dir\") pod \"csi-node-driver-4568b\" (UID: \"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839\") " pod="calico-system/csi-node-driver-4568b" Apr 30 00:53:08.484839 kubelet[2954]: E0430 00:53:08.484560 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.484839 kubelet[2954]: W0430 00:53:08.484592 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.484839 kubelet[2954]: E0430 00:53:08.484621 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.484839 kubelet[2954]: I0430 00:53:08.484648 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/128e4be7-9f7d-4c2d-8a19-50ffaa3dc839-socket-dir\") pod \"csi-node-driver-4568b\" (UID: \"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839\") " pod="calico-system/csi-node-driver-4568b" Apr 30 00:53:08.485613 kubelet[2954]: E0430 00:53:08.485434 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.485613 kubelet[2954]: W0430 00:53:08.485461 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.485613 kubelet[2954]: E0430 00:53:08.485480 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.485613 kubelet[2954]: I0430 00:53:08.485506 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwq2g\" (UniqueName: \"kubernetes.io/projected/128e4be7-9f7d-4c2d-8a19-50ffaa3dc839-kube-api-access-jwq2g\") pod \"csi-node-driver-4568b\" (UID: \"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839\") " pod="calico-system/csi-node-driver-4568b" Apr 30 00:53:08.487254 kubelet[2954]: E0430 00:53:08.487227 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.487254 kubelet[2954]: W0430 00:53:08.487251 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.487351 kubelet[2954]: E0430 00:53:08.487278 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.487351 kubelet[2954]: I0430 00:53:08.487306 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/128e4be7-9f7d-4c2d-8a19-50ffaa3dc839-varrun\") pod \"csi-node-driver-4568b\" (UID: \"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839\") " pod="calico-system/csi-node-driver-4568b" Apr 30 00:53:08.489436 kubelet[2954]: E0430 00:53:08.489405 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.489436 kubelet[2954]: W0430 00:53:08.489428 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.489764 kubelet[2954]: E0430 00:53:08.489616 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.489764 kubelet[2954]: W0430 00:53:08.489623 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.489764 kubelet[2954]: E0430 00:53:08.489747 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.489764 kubelet[2954]: W0430 00:53:08.489753 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.490744 kubelet[2954]: E0430 00:53:08.489929 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.490744 kubelet[2954]: W0430 00:53:08.489981 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.490744 kubelet[2954]: E0430 00:53:08.489993 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.490744 kubelet[2954]: E0430 00:53:08.489534 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.490744 kubelet[2954]: E0430 00:53:08.490471 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.490744 kubelet[2954]: E0430 00:53:08.490487 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.490744 kubelet[2954]: E0430 00:53:08.490737 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.490744 kubelet[2954]: W0430 00:53:08.490750 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.491277 kubelet[2954]: E0430 00:53:08.490763 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.492654 kubelet[2954]: E0430 00:53:08.491417 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.492654 kubelet[2954]: W0430 00:53:08.491434 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.492654 kubelet[2954]: E0430 00:53:08.491448 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.492654 kubelet[2954]: E0430 00:53:08.492277 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.492654 kubelet[2954]: W0430 00:53:08.492293 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.492654 kubelet[2954]: E0430 00:53:08.492309 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.494071 kubelet[2954]: E0430 00:53:08.494039 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.494071 kubelet[2954]: W0430 00:53:08.494063 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.494071 kubelet[2954]: E0430 00:53:08.494084 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.535158 containerd[1612]: time="2025-04-30T00:53:08.535052587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7748cf48db-x8495,Uid:850b6ee0-cd1a-4ef9-8789-7799cb206094,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d8154c7d82db36142de8d9fe297d0b8deadc243d6cb2070c3e48f8f31993f2e\"" Apr 30 00:53:08.539587 containerd[1612]: time="2025-04-30T00:53:08.539449198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 00:53:08.549555 containerd[1612]: time="2025-04-30T00:53:08.549498537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j7b4g,Uid:bd7869e7-5729-4e4f-8fe7-2daf97220a78,Namespace:calico-system,Attempt:0,}" Apr 30 00:53:08.593739 kubelet[2954]: E0430 00:53:08.593265 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.593739 kubelet[2954]: W0430 00:53:08.593295 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.593739 kubelet[2954]: E0430 00:53:08.593320 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.594000 kubelet[2954]: E0430 00:53:08.593759 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.594000 kubelet[2954]: W0430 00:53:08.593770 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.594000 kubelet[2954]: E0430 00:53:08.593799 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.596360 kubelet[2954]: E0430 00:53:08.594350 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.596360 kubelet[2954]: W0430 00:53:08.594392 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.596360 kubelet[2954]: E0430 00:53:08.594411 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.596360 kubelet[2954]: E0430 00:53:08.594969 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.596360 kubelet[2954]: W0430 00:53:08.594987 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.596360 kubelet[2954]: E0430 00:53:08.595001 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.597194 kubelet[2954]: E0430 00:53:08.596757 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.597194 kubelet[2954]: W0430 00:53:08.596780 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.597766 kubelet[2954]: E0430 00:53:08.597471 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.600025 kubelet[2954]: E0430 00:53:08.599929 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.600025 kubelet[2954]: W0430 00:53:08.599981 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.600959 kubelet[2954]: E0430 00:53:08.600296 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.602368 kubelet[2954]: E0430 00:53:08.602040 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.602368 kubelet[2954]: W0430 00:53:08.602062 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.602368 kubelet[2954]: E0430 00:53:08.602198 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.602709 kubelet[2954]: E0430 00:53:08.602389 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.602709 kubelet[2954]: W0430 00:53:08.602398 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.602709 kubelet[2954]: E0430 00:53:08.602492 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.604078 kubelet[2954]: E0430 00:53:08.602889 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.604078 kubelet[2954]: W0430 00:53:08.602902 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.604752 kubelet[2954]: E0430 00:53:08.604603 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.605093 kubelet[2954]: E0430 00:53:08.605063 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.605093 kubelet[2954]: W0430 00:53:08.605086 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.605390 kubelet[2954]: E0430 00:53:08.605225 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.605906 kubelet[2954]: E0430 00:53:08.605457 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.605906 kubelet[2954]: W0430 00:53:08.605479 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.605906 kubelet[2954]: E0430 00:53:08.605678 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.605906 kubelet[2954]: W0430 00:53:08.605685 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.605906 kubelet[2954]: E0430 00:53:08.605763 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.605906 kubelet[2954]: E0430 00:53:08.605800 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.606464 kubelet[2954]: E0430 00:53:08.606438 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.606464 kubelet[2954]: W0430 00:53:08.606454 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.606828 containerd[1612]: time="2025-04-30T00:53:08.606724678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:08.606925 containerd[1612]: time="2025-04-30T00:53:08.606790960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:08.606925 containerd[1612]: time="2025-04-30T00:53:08.606850082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:08.607135 containerd[1612]: time="2025-04-30T00:53:08.607011327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:08.607748 kubelet[2954]: E0430 00:53:08.607720 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.608437 kubelet[2954]: E0430 00:53:08.608414 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.608437 kubelet[2954]: W0430 00:53:08.608433 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.608796 kubelet[2954]: E0430 00:53:08.608538 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.609008 kubelet[2954]: E0430 00:53:08.608980 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.609008 kubelet[2954]: W0430 00:53:08.609002 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.612338 kubelet[2954]: E0430 00:53:08.612298 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.612451 kubelet[2954]: E0430 00:53:08.612405 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.612451 kubelet[2954]: W0430 00:53:08.612416 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.612639 kubelet[2954]: E0430 00:53:08.612551 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.612770 kubelet[2954]: E0430 00:53:08.612670 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.612770 kubelet[2954]: W0430 00:53:08.612679 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.612770 kubelet[2954]: E0430 00:53:08.612765 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.613182 kubelet[2954]: E0430 00:53:08.613163 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.613182 kubelet[2954]: W0430 00:53:08.613180 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.613913 kubelet[2954]: E0430 00:53:08.613272 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.614262 kubelet[2954]: E0430 00:53:08.614240 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.614479 kubelet[2954]: W0430 00:53:08.614266 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.614479 kubelet[2954]: E0430 00:53:08.614305 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.614823 kubelet[2954]: E0430 00:53:08.614788 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.614823 kubelet[2954]: W0430 00:53:08.614812 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.616750 kubelet[2954]: E0430 00:53:08.616020 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.616750 kubelet[2954]: E0430 00:53:08.616602 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.616750 kubelet[2954]: W0430 00:53:08.616618 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.617055 kubelet[2954]: E0430 00:53:08.616808 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.617055 kubelet[2954]: W0430 00:53:08.616815 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.617113 kubelet[2954]: E0430 00:53:08.617091 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.617113 kubelet[2954]: W0430 00:53:08.617101 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.617624 kubelet[2954]: E0430 00:53:08.617241 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.617624 kubelet[2954]: W0430 00:53:08.617259 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.617624 kubelet[2954]: E0430 00:53:08.617273 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.617624 kubelet[2954]: E0430 00:53:08.617303 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.617624 kubelet[2954]: E0430 00:53:08.617316 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.617624 kubelet[2954]: E0430 00:53:08.617327 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.618789 kubelet[2954]: E0430 00:53:08.618128 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.618789 kubelet[2954]: W0430 00:53:08.618146 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.618789 kubelet[2954]: E0430 00:53:08.618269 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:53:08.638980 kubelet[2954]: E0430 00:53:08.638389 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:53:08.638980 kubelet[2954]: W0430 00:53:08.638412 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:53:08.638980 kubelet[2954]: E0430 00:53:08.638487 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:53:08.677420 containerd[1612]: time="2025-04-30T00:53:08.677366179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j7b4g,Uid:bd7869e7-5729-4e4f-8fe7-2daf97220a78,Namespace:calico-system,Attempt:0,} returns sandbox id \"55359e6e69f10e634f23e9e721e2f29959ff57a2be2ea176cc52cbe67d0b98f7\"" Apr 30 00:53:10.295625 kubelet[2954]: E0430 00:53:10.295580 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4568b" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" Apr 30 00:53:10.908470 containerd[1612]: time="2025-04-30T00:53:10.908421833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:10.911014 containerd[1612]: time="2025-04-30T00:53:10.909884794Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:10.911014 containerd[1612]: time="2025-04-30T00:53:10.910079800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 00:53:10.917272 containerd[1612]: time="2025-04-30T00:53:10.916393141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:10.918844 containerd[1612]: time="2025-04-30T00:53:10.918638926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.379121406s" Apr 30 00:53:10.918844 containerd[1612]: time="2025-04-30T00:53:10.918839251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 00:53:10.920382 containerd[1612]: time="2025-04-30T00:53:10.920348335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:53:10.942985 containerd[1612]: time="2025-04-30T00:53:10.941517782Z" level=info msg="CreateContainer within sandbox \"1d8154c7d82db36142de8d9fe297d0b8deadc243d6cb2070c3e48f8f31993f2e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:53:10.966399 containerd[1612]: time="2025-04-30T00:53:10.965973523Z" level=info msg="CreateContainer within sandbox \"1d8154c7d82db36142de8d9fe297d0b8deadc243d6cb2070c3e48f8f31993f2e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5858dc7006db2af654294b6609c85553997940a274a72d9649eca7f89b8f2e6d\"" Apr 30 00:53:10.967613 containerd[1612]: time="2025-04-30T00:53:10.967570649Z" level=info msg="StartContainer for \"5858dc7006db2af654294b6609c85553997940a274a72d9649eca7f89b8f2e6d\"" Apr 30 00:53:11.045681 containerd[1612]: time="2025-04-30T00:53:11.045540183Z" level=info msg="StartContainer for \"5858dc7006db2af654294b6609c85553997940a274a72d9649eca7f89b8f2e6d\" returns successfully" Apr 30 00:53:11.500123 
Apr 30 00:53:11.500123 kubelet[2954]: E0430 00:53:11.500061 2954 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:53:11.500123 kubelet[2954]: W0430 00:53:11.500105 2954 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:53:11.500754 kubelet[2954]: E0430 00:53:11.500155 2954 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same probe-failure triplet repeats verbatim, timestamps advancing, through Apr 30 00:53:11.527; repeats elided]
Error: unexpected end of JSON input" Apr 30 00:53:12.237370 containerd[1612]: time="2025-04-30T00:53:12.237316136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:12.239075 containerd[1612]: time="2025-04-30T00:53:12.238827178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 00:53:12.240097 containerd[1612]: time="2025-04-30T00:53:12.239803965Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:12.242408 containerd[1612]: time="2025-04-30T00:53:12.242375636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:12.244145 containerd[1612]: time="2025-04-30T00:53:12.244100004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.323545624s" Apr 30 00:53:12.244145 containerd[1612]: time="2025-04-30T00:53:12.244143565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:53:12.247893 containerd[1612]: time="2025-04-30T00:53:12.247812827Z" level=info msg="CreateContainer within sandbox \"55359e6e69f10e634f23e9e721e2f29959ff57a2be2ea176cc52cbe67d0b98f7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:53:12.264087 containerd[1612]: time="2025-04-30T00:53:12.264035396Z" level=info msg="CreateContainer within sandbox \"55359e6e69f10e634f23e9e721e2f29959ff57a2be2ea176cc52cbe67d0b98f7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"85af7b97c11c5be0b8b5e9d4ab382465f06d8c938d75d7cb21c8f0a856dc3619\"" Apr 30 00:53:12.266010 containerd[1612]: time="2025-04-30T00:53:12.264600932Z" level=info msg="StartContainer for \"85af7b97c11c5be0b8b5e9d4ab382465f06d8c938d75d7cb21c8f0a856dc3619\"" Apr 30 00:53:12.296016 kubelet[2954]: E0430 00:53:12.295913 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4568b" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" Apr 30 00:53:12.334028 containerd[1612]: time="2025-04-30T00:53:12.333902011Z" level=info msg="StartContainer for \"85af7b97c11c5be0b8b5e9d4ab382465f06d8c938d75d7cb21c8f0a856dc3619\" returns successfully" Apr 30 00:53:12.381753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85af7b97c11c5be0b8b5e9d4ab382465f06d8c938d75d7cb21c8f0a856dc3619-rootfs.mount: Deactivated successfully. 
Apr 30 00:53:12.446629 kubelet[2954]: I0430 00:53:12.446355 2954 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:53:12.470045 containerd[1612]: time="2025-04-30T00:53:12.469931059Z" level=info msg="shim disconnected" id=85af7b97c11c5be0b8b5e9d4ab382465f06d8c938d75d7cb21c8f0a856dc3619 namespace=k8s.io Apr 30 00:53:12.470045 containerd[1612]: time="2025-04-30T00:53:12.470005941Z" level=warning msg="cleaning up after shim disconnected" id=85af7b97c11c5be0b8b5e9d4ab382465f06d8c938d75d7cb21c8f0a856dc3619 namespace=k8s.io Apr 30 00:53:12.470045 containerd[1612]: time="2025-04-30T00:53:12.470015341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:53:12.477006 kubelet[2954]: I0430 00:53:12.476467 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7748cf48db-x8495" podStartSLOduration=3.094970206 podStartE2EDuration="5.47644736s" podCreationTimestamp="2025-04-30 00:53:07 +0000 UTC" firstStartedPulling="2025-04-30 00:53:08.538757577 +0000 UTC m=+23.383383908" lastFinishedPulling="2025-04-30 00:53:10.920234731 +0000 UTC m=+25.764861062" observedRunningTime="2025-04-30 00:53:11.453274194 +0000 UTC m=+26.297900525" watchObservedRunningTime="2025-04-30 00:53:12.47644736 +0000 UTC m=+27.321073651" Apr 30 00:53:13.455986 containerd[1612]: time="2025-04-30T00:53:13.455914434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:53:14.295907 kubelet[2954]: E0430 00:53:14.295658 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4568b" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" Apr 30 00:53:16.175533 containerd[1612]: time="2025-04-30T00:53:16.175480815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:16.178074 containerd[1612]: time="2025-04-30T00:53:16.178022601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:53:16.179243 containerd[1612]: time="2025-04-30T00:53:16.178800381Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:16.184494 containerd[1612]: time="2025-04-30T00:53:16.183034091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:16.184494 containerd[1612]: time="2025-04-30T00:53:16.184410407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.728118642s" Apr 30 00:53:16.184494 containerd[1612]: time="2025-04-30T00:53:16.184445528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:53:16.189896 containerd[1612]: time="2025-04-30T00:53:16.189847147Z" level=info 
msg="CreateContainer within sandbox \"55359e6e69f10e634f23e9e721e2f29959ff57a2be2ea176cc52cbe67d0b98f7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:53:16.208419 containerd[1612]: time="2025-04-30T00:53:16.208343667Z" level=info msg="CreateContainer within sandbox \"55359e6e69f10e634f23e9e721e2f29959ff57a2be2ea176cc52cbe67d0b98f7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8f98b3c2a01f2f224a25ee8211757dd7a47eea8ddc53478d36e995439c886dac\"" Apr 30 00:53:16.209957 containerd[1612]: time="2025-04-30T00:53:16.209897067Z" level=info msg="StartContainer for \"8f98b3c2a01f2f224a25ee8211757dd7a47eea8ddc53478d36e995439c886dac\"" Apr 30 00:53:16.296049 kubelet[2954]: E0430 00:53:16.295683 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4568b" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" Apr 30 00:53:16.306249 containerd[1612]: time="2025-04-30T00:53:16.306080278Z" level=info msg="StartContainer for \"8f98b3c2a01f2f224a25ee8211757dd7a47eea8ddc53478d36e995439c886dac\" returns successfully" Apr 30 00:53:16.956163 containerd[1612]: time="2025-04-30T00:53:16.955775507Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:53:16.973071 kubelet[2954]: I0430 00:53:16.972485 2954 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:53:16.990485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f98b3c2a01f2f224a25ee8211757dd7a47eea8ddc53478d36e995439c886dac-rootfs.mount: Deactivated successfully. 
Apr 30 00:53:17.015491 kubelet[2954]: I0430 00:53:17.013517 2954 topology_manager.go:215] "Topology Admit Handler" podUID="4c9552da-7cf1-4abb-b74a-aa79391aa549" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v4z9k" Apr 30 00:53:17.017804 kubelet[2954]: I0430 00:53:17.016901 2954 topology_manager.go:215] "Topology Admit Handler" podUID="3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hnv8" Apr 30 00:53:17.035210 kubelet[2954]: I0430 00:53:17.033986 2954 topology_manager.go:215] "Topology Admit Handler" podUID="ef26787d-7604-4d5c-b737-eddfb5e0d093" podNamespace="calico-system" podName="calico-kube-controllers-6f9694b44d-f2nb6" Apr 30 00:53:17.038041 kubelet[2954]: I0430 00:53:17.037978 2954 topology_manager.go:215] "Topology Admit Handler" podUID="013c8471-b85b-43e8-91b6-3f1bd76d6d79" podNamespace="calico-apiserver" podName="calico-apiserver-5bb6fc6d5c-rwb45" Apr 30 00:53:17.039305 kubelet[2954]: I0430 00:53:17.039272 2954 topology_manager.go:215] "Topology Admit Handler" podUID="f8d0391c-5506-4c4e-a935-4ff4aee98d0c" podNamespace="calico-apiserver" podName="calico-apiserver-5bb6fc6d5c-65nrh" Apr 30 00:53:17.067350 kubelet[2954]: I0430 00:53:17.067038 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8d0391c-5506-4c4e-a935-4ff4aee98d0c-calico-apiserver-certs\") pod \"calico-apiserver-5bb6fc6d5c-65nrh\" (UID: \"f8d0391c-5506-4c4e-a935-4ff4aee98d0c\") " pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-65nrh" Apr 30 00:53:17.067350 kubelet[2954]: I0430 00:53:17.067078 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8-config-volume\") pod \"coredns-7db6d8ff4d-9hnv8\" (UID: \"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8\") " pod="kube-system/coredns-7db6d8ff4d-9hnv8" Apr 30 00:53:17.067350 kubelet[2954]: I0430 00:53:17.067106 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9552da-7cf1-4abb-b74a-aa79391aa549-config-volume\") pod \"coredns-7db6d8ff4d-v4z9k\" (UID: \"4c9552da-7cf1-4abb-b74a-aa79391aa549\") " pod="kube-system/coredns-7db6d8ff4d-v4z9k" Apr 30 00:53:17.067350 kubelet[2954]: I0430 00:53:17.067139 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef26787d-7604-4d5c-b737-eddfb5e0d093-tigera-ca-bundle\") pod \"calico-kube-controllers-6f9694b44d-f2nb6\" (UID: \"ef26787d-7604-4d5c-b737-eddfb5e0d093\") " pod="calico-system/calico-kube-controllers-6f9694b44d-f2nb6" Apr 30 00:53:17.067350 kubelet[2954]: I0430 00:53:17.067158 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cccfg\" (UniqueName: \"kubernetes.io/projected/013c8471-b85b-43e8-91b6-3f1bd76d6d79-kube-api-access-cccfg\") pod \"calico-apiserver-5bb6fc6d5c-rwb45\" (UID: \"013c8471-b85b-43e8-91b6-3f1bd76d6d79\") " pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-rwb45" Apr 30 00:53:17.067961 kubelet[2954]: I0430 00:53:17.067176 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bbbj\" (UniqueName: \"kubernetes.io/projected/4c9552da-7cf1-4abb-b74a-aa79391aa549-kube-api-access-7bbbj\") pod 
\"coredns-7db6d8ff4d-v4z9k\" (UID: \"4c9552da-7cf1-4abb-b74a-aa79391aa549\") " pod="kube-system/coredns-7db6d8ff4d-v4z9k" Apr 30 00:53:17.067961 kubelet[2954]: I0430 00:53:17.067194 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brhsh\" (UniqueName: \"kubernetes.io/projected/ef26787d-7604-4d5c-b737-eddfb5e0d093-kube-api-access-brhsh\") pod \"calico-kube-controllers-6f9694b44d-f2nb6\" (UID: \"ef26787d-7604-4d5c-b737-eddfb5e0d093\") " pod="calico-system/calico-kube-controllers-6f9694b44d-f2nb6" Apr 30 00:53:17.067961 kubelet[2954]: I0430 00:53:17.067214 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wz2r\" (UniqueName: \"kubernetes.io/projected/3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8-kube-api-access-6wz2r\") pod \"coredns-7db6d8ff4d-9hnv8\" (UID: \"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8\") " pod="kube-system/coredns-7db6d8ff4d-9hnv8" Apr 30 00:53:17.067961 kubelet[2954]: I0430 00:53:17.067231 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/013c8471-b85b-43e8-91b6-3f1bd76d6d79-calico-apiserver-certs\") pod \"calico-apiserver-5bb6fc6d5c-rwb45\" (UID: \"013c8471-b85b-43e8-91b6-3f1bd76d6d79\") " pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-rwb45" Apr 30 00:53:17.067961 kubelet[2954]: I0430 00:53:17.067248 2954 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphr7\" (UniqueName: \"kubernetes.io/projected/f8d0391c-5506-4c4e-a935-4ff4aee98d0c-kube-api-access-sphr7\") pod \"calico-apiserver-5bb6fc6d5c-65nrh\" (UID: \"f8d0391c-5506-4c4e-a935-4ff4aee98d0c\") " pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-65nrh" Apr 30 00:53:17.093489 containerd[1612]: time="2025-04-30T00:53:17.093177107Z" level=info msg="shim disconnected" id=8f98b3c2a01f2f224a25ee8211757dd7a47eea8ddc53478d36e995439c886dac namespace=k8s.io Apr 30 00:53:17.093489 containerd[1612]: time="2025-04-30T00:53:17.093261909Z" level=warning msg="cleaning up after shim disconnected" id=8f98b3c2a01f2f224a25ee8211757dd7a47eea8ddc53478d36e995439c886dac namespace=k8s.io Apr 30 00:53:17.093489 containerd[1612]: time="2025-04-30T00:53:17.093274750Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:53:17.324649 containerd[1612]: time="2025-04-30T00:53:17.323976350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v4z9k,Uid:4c9552da-7cf1-4abb-b74a-aa79391aa549,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:17.324649 containerd[1612]: time="2025-04-30T00:53:17.324604446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnv8,Uid:3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:17.344096 containerd[1612]: time="2025-04-30T00:53:17.343464807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9694b44d-f2nb6,Uid:ef26787d-7604-4d5c-b737-eddfb5e0d093,Namespace:calico-system,Attempt:0,}" Apr 30 00:53:17.350779 containerd[1612]: time="2025-04-30T00:53:17.350637429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-65nrh,Uid:f8d0391c-5506-4c4e-a935-4ff4aee98d0c,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:53:17.356515 containerd[1612]: time="2025-04-30T00:53:17.356374776Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-rwb45,Uid:013c8471-b85b-43e8-91b6-3f1bd76d6d79,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:53:17.486610 containerd[1612]: time="2025-04-30T00:53:17.486049681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:53:17.567996 containerd[1612]: time="2025-04-30T00:53:17.567821605Z" level=error msg="Failed to destroy network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.568677 containerd[1612]: time="2025-04-30T00:53:17.568640626Z" level=error msg="encountered an error cleaning up failed sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.569108 containerd[1612]: time="2025-04-30T00:53:17.568980194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-65nrh,Uid:f8d0391c-5506-4c4e-a935-4ff4aee98d0c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.570020 kubelet[2954]: E0430 00:53:17.569663 2954 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.570020 kubelet[2954]: E0430 00:53:17.569889 2954 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-65nrh" Apr 30 00:53:17.570020 kubelet[2954]: E0430 00:53:17.569913 2954 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-65nrh" Apr 30 00:53:17.570832 kubelet[2954]: E0430 00:53:17.569964 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb6fc6d5c-65nrh_calico-apiserver(f8d0391c-5506-4c4e-a935-4ff4aee98d0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb6fc6d5c-65nrh_calico-apiserver(f8d0391c-5506-4c4e-a935-4ff4aee98d0c)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-65nrh" podUID="f8d0391c-5506-4c4e-a935-4ff4aee98d0c" Apr 30 00:53:17.585335 containerd[1612]: time="2025-04-30T00:53:17.584210183Z" level=error msg="Failed to destroy network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.587350 containerd[1612]: time="2025-04-30T00:53:17.587168818Z" level=error msg="encountered an error cleaning up failed sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.587350 containerd[1612]: time="2025-04-30T00:53:17.587244620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnv8,Uid:3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.587509 kubelet[2954]: E0430 00:53:17.587474 2954 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.587552 kubelet[2954]: E0430 00:53:17.587530 2954 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hnv8" Apr 30 00:53:17.587581 kubelet[2954]: E0430 00:53:17.587548 2954 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hnv8" Apr 30 00:53:17.587606 kubelet[2954]: E0430 00:53:17.587586 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9hnv8_kube-system(3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-9hnv8_kube-system(3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hnv8" podUID="3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8" Apr 30 00:53:17.589609 containerd[1612]: time="2025-04-30T00:53:17.589120308Z" level=error msg="Failed to destroy network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.590122 containerd[1612]: time="2025-04-30T00:53:17.590075652Z" level=error msg="encountered an error cleaning up failed sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.590184 containerd[1612]: time="2025-04-30T00:53:17.590142374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v4z9k,Uid:4c9552da-7cf1-4abb-b74a-aa79391aa549,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.590511 kubelet[2954]: E0430 00:53:17.590393 2954 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.590578 kubelet[2954]: E0430 00:53:17.590529 2954 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v4z9k" Apr 30 00:53:17.590578 kubelet[2954]: E0430 00:53:17.590552 2954 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v4z9k" Apr 30 00:53:17.591091 kubelet[2954]: E0430 00:53:17.591045 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-v4z9k_kube-system(4c9552da-7cf1-4abb-b74a-aa79391aa549)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-v4z9k_kube-system(4c9552da-7cf1-4abb-b74a-aa79391aa549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-v4z9k" podUID="4c9552da-7cf1-4abb-b74a-aa79391aa549" Apr 30 00:53:17.600552 containerd[1612]: time="2025-04-30T00:53:17.600315393Z" level=error msg="Failed to destroy network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.601535 containerd[1612]: time="2025-04-30T00:53:17.601433622Z" level=error msg="encountered an error cleaning up failed sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.601683 containerd[1612]: time="2025-04-30T00:53:17.601661267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-rwb45,Uid:013c8471-b85b-43e8-91b6-3f1bd76d6d79,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.602095 kubelet[2954]: E0430 00:53:17.602061 2954 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.603350 kubelet[2954]: E0430 00:53:17.602274 2954 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-rwb45" Apr 30 00:53:17.603350 kubelet[2954]: E0430 00:53:17.602301 2954 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-rwb45" Apr 30 00:53:17.603350 kubelet[2954]: E0430 00:53:17.602355 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-5bb6fc6d5c-rwb45_calico-apiserver(013c8471-b85b-43e8-91b6-3f1bd76d6d79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb6fc6d5c-rwb45_calico-apiserver(013c8471-b85b-43e8-91b6-3f1bd76d6d79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-rwb45" podUID="013c8471-b85b-43e8-91b6-3f1bd76d6d79" Apr 30 00:53:17.617673 containerd[1612]: time="2025-04-30T00:53:17.617622754Z" level=error msg="Failed to destroy network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.618428 containerd[1612]: time="2025-04-30T00:53:17.618233210Z" level=error msg="encountered an error cleaning up failed sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.618428 containerd[1612]: time="2025-04-30T00:53:17.618319932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9694b44d-f2nb6,Uid:ef26787d-7604-4d5c-b737-eddfb5e0d093,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.618792 kubelet[2954]: E0430 00:53:17.618725 2954 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:17.618864 kubelet[2954]: E0430 00:53:17.618796 2954 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9694b44d-f2nb6" Apr 30 00:53:17.618864 kubelet[2954]: E0430 00:53:17.618814 2954 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6f9694b44d-f2nb6" Apr 30 00:53:17.618929 kubelet[2954]: E0430 00:53:17.618856 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f9694b44d-f2nb6_calico-system(ef26787d-7604-4d5c-b737-eddfb5e0d093)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f9694b44d-f2nb6_calico-system(ef26787d-7604-4d5c-b737-eddfb5e0d093)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9694b44d-f2nb6" podUID="ef26787d-7604-4d5c-b737-eddfb5e0d093" Apr 30 00:53:18.209982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b-shm.mount: Deactivated successfully. Apr 30 00:53:18.302161 containerd[1612]: time="2025-04-30T00:53:18.301670988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4568b,Uid:128e4be7-9f7d-4c2d-8a19-50ffaa3dc839,Namespace:calico-system,Attempt:0,}" Apr 30 00:53:18.376376 containerd[1612]: time="2025-04-30T00:53:18.376291620Z" level=error msg="Failed to destroy network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.379104 containerd[1612]: time="2025-04-30T00:53:18.377470970Z" level=error msg="encountered an error cleaning up failed sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.379104 containerd[1612]: time="2025-04-30T00:53:18.377560132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4568b,Uid:128e4be7-9f7d-4c2d-8a19-50ffaa3dc839,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.379294 kubelet[2954]: E0430 00:53:18.379200 2954 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.379294 kubelet[2954]: E0430 00:53:18.379269 2954 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4568b" Apr 30 00:53:18.379435 kubelet[2954]: E0430 00:53:18.379289 2954 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4568b" Apr 30 00:53:18.379435 kubelet[2954]: E0430 00:53:18.379339 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4568b_calico-system(128e4be7-9f7d-4c2d-8a19-50ffaa3dc839)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4568b_calico-system(128e4be7-9f7d-4c2d-8a19-50ffaa3dc839)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4568b" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" Apr 30 00:53:18.381101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224-shm.mount: Deactivated successfully. Apr 30 00:53:18.489751 kubelet[2954]: I0430 00:53:18.488749 2954 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:18.490917 kubelet[2954]: I0430 00:53:18.490728 2954 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:18.492401 containerd[1612]: time="2025-04-30T00:53:18.491628154Z" level=info msg="StopPodSandbox for \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\"" Apr 30 00:53:18.492401 containerd[1612]: time="2025-04-30T00:53:18.491809998Z" level=info msg="Ensure that sandbox 9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395 in task-service has been cleanup successfully" Apr 30 00:53:18.493452 containerd[1612]: time="2025-04-30T00:53:18.493391358Z" level=info msg="StopPodSandbox for \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\"" Apr 30 00:53:18.494381 containerd[1612]: time="2025-04-30T00:53:18.494346102Z" level=info msg="Ensure that sandbox ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074 in task-service has been cleanup successfully" Apr 30 00:53:18.494739 kubelet[2954]: I0430 00:53:18.494713 2954 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:18.495779 containerd[1612]: time="2025-04-30T00:53:18.495606773Z" level=info msg="StopPodSandbox for \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\"" Apr 30 00:53:18.497341 containerd[1612]: time="2025-04-30T00:53:18.497054050Z" level=info msg="Ensure that sandbox d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224 in task-service has been cleanup successfully" Apr 30 00:53:18.501224 kubelet[2954]: I0430 00:53:18.501169 2954 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:18.503706 containerd[1612]: time="2025-04-30T00:53:18.503325767Z" level=info msg="StopPodSandbox for \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\"" Apr 30 00:53:18.503706 containerd[1612]: time="2025-04-30T00:53:18.503497891Z" level=info msg="Ensure that sandbox 233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c in task-service has been cleanup successfully" Apr 30 00:53:18.510044 kubelet[2954]: I0430 00:53:18.509996 2954 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:18.511910 containerd[1612]: time="2025-04-30T00:53:18.511186844Z" level=info msg="StopPodSandbox for \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\"" Apr 30 00:53:18.511910 containerd[1612]: time="2025-04-30T00:53:18.511394729Z" level=info msg="Ensure that sandbox 125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a in task-service has been cleanup successfully" Apr 30 00:53:18.523438 kubelet[2954]: I0430 00:53:18.523410 2954 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:18.526187 containerd[1612]: time="2025-04-30T00:53:18.526146020Z" level=info msg="StopPodSandbox for \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\"" Apr 30 00:53:18.527890 containerd[1612]: time="2025-04-30T00:53:18.527462493Z" level=info msg="Ensure that sandbox c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b in task-service has been cleanup successfully" Apr 30 00:53:18.594000 containerd[1612]: time="2025-04-30T00:53:18.593894319Z" level=error msg="StopPodSandbox for \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\" failed" error="failed to destroy network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.594877 kubelet[2954]: E0430 00:53:18.594250 2954 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:18.594877 kubelet[2954]: E0430 00:53:18.594304 2954 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074"} Apr 30 00:53:18.594877 kubelet[2954]: E0430 00:53:18.594375 2954 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Apr 30 00:53:18.594877 kubelet[2954]: E0430 00:53:18.594396 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hnv8" podUID="3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8" Apr 30 00:53:18.611749 containerd[1612]: time="2025-04-30T00:53:18.610413253Z" level=error msg="StopPodSandbox for \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\" failed" error="failed to destroy network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.611918 kubelet[2954]: E0430 00:53:18.610818 2954 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:18.611918 kubelet[2954]: E0430 00:53:18.610885 2954 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c"} Apr 30 00:53:18.611918 kubelet[2954]: E0430 00:53:18.610923 2954 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"013c8471-b85b-43e8-91b6-3f1bd76d6d79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:53:18.611918 kubelet[2954]: E0430 00:53:18.610961 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"013c8471-b85b-43e8-91b6-3f1bd76d6d79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-rwb45" podUID="013c8471-b85b-43e8-91b6-3f1bd76d6d79" Apr 30 00:53:18.612606 containerd[1612]: time="2025-04-30T00:53:18.612551147Z" level=error msg="StopPodSandbox for \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\" failed" error="failed to destroy network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.613037 kubelet[2954]: E0430 00:53:18.612831 2954 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:18.613037 kubelet[2954]: E0430 00:53:18.612882 2954 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224"} Apr 30 00:53:18.613037 kubelet[2954]: E0430 00:53:18.612921 2954 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:53:18.613037 kubelet[2954]: E0430 00:53:18.612988 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4568b" podUID="128e4be7-9f7d-4c2d-8a19-50ffaa3dc839" Apr 30 00:53:18.616148 containerd[1612]: time="2025-04-30T00:53:18.616095476Z" level=error msg="StopPodSandbox for \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\" failed" error="failed to destroy network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.616646 kubelet[2954]: E0430 00:53:18.616504 2954 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:18.616646 kubelet[2954]: E0430 00:53:18.616561 2954 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395"} Apr 30 00:53:18.616646 kubelet[2954]: E0430 00:53:18.616592 2954 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef26787d-7604-4d5c-b737-eddfb5e0d093\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:53:18.616646 kubelet[2954]: E0430 00:53:18.616613 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef26787d-7604-4d5c-b737-eddfb5e0d093\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9694b44d-f2nb6" podUID="ef26787d-7604-4d5c-b737-eddfb5e0d093" Apr 30 00:53:18.622498 containerd[1612]: time="2025-04-30T00:53:18.622431995Z" level=error msg="StopPodSandbox for \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\" failed" error="failed to destroy network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.622905 kubelet[2954]: E0430 00:53:18.622755 2954 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:18.622905 kubelet[2954]: E0430 00:53:18.622816 2954 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a"} Apr 30 00:53:18.622905 kubelet[2954]: E0430 00:53:18.622850 2954 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8d0391c-5506-4c4e-a935-4ff4aee98d0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:53:18.622905 kubelet[2954]: E0430 00:53:18.622872 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8d0391c-5506-4c4e-a935-4ff4aee98d0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-65nrh" podUID="f8d0391c-5506-4c4e-a935-4ff4aee98d0c" Apr 30 00:53:18.627228 containerd[1612]: time="2025-04-30T00:53:18.627152993Z" level=error 
msg="StopPodSandbox for \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\" failed" error="failed to destroy network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:53:18.627907 kubelet[2954]: E0430 00:53:18.627802 2954 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:18.628101 kubelet[2954]: E0430 00:53:18.627962 2954 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b"} Apr 30 00:53:18.628101 kubelet[2954]: E0430 00:53:18.628013 2954 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c9552da-7cf1-4abb-b74a-aa79391aa549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:53:18.628101 kubelet[2954]: E0430 00:53:18.628069 2954 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c9552da-7cf1-4abb-b74a-aa79391aa549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-v4z9k" podUID="4c9552da-7cf1-4abb-b74a-aa79391aa549" Apr 30 00:53:21.203772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3905021572.mount: Deactivated successfully. 
Apr 30 00:53:21.244221 containerd[1612]: time="2025-04-30T00:53:21.244148575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:21.246734 containerd[1612]: time="2025-04-30T00:53:21.246648475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:53:21.248964 containerd[1612]: time="2025-04-30T00:53:21.248001907Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:21.250609 containerd[1612]: time="2025-04-30T00:53:21.250548888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:21.251587 containerd[1612]: time="2025-04-30T00:53:21.251443950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.765346027s" Apr 30 00:53:21.251587 containerd[1612]: time="2025-04-30T00:53:21.251478870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:53:21.269198 containerd[1612]: time="2025-04-30T00:53:21.269127093Z" level=info msg="CreateContainer within sandbox \"55359e6e69f10e634f23e9e721e2f29959ff57a2be2ea176cc52cbe67d0b98f7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:53:21.290975 containerd[1612]: time="2025-04-30T00:53:21.290715490Z" level=info msg="CreateContainer within sandbox \"55359e6e69f10e634f23e9e721e2f29959ff57a2be2ea176cc52cbe67d0b98f7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"58e02cbff908733575c63647c75694dccb29d64e76d4297a6222bdfdc5c7411c\"" Apr 30 00:53:21.291621 containerd[1612]: time="2025-04-30T00:53:21.291590791Z" level=info msg="StartContainer for \"58e02cbff908733575c63647c75694dccb29d64e76d4297a6222bdfdc5c7411c\"" Apr 30 00:53:21.365176 containerd[1612]: time="2025-04-30T00:53:21.365137073Z" level=info msg="StartContainer for \"58e02cbff908733575c63647c75694dccb29d64e76d4297a6222bdfdc5c7411c\" returns successfully" Apr 30 00:53:21.493974 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:53:21.494271 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
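The pull that unblocks networking completes here: the "Pulled image" entry reports a repo-digest size of 138981755 bytes fetched in 3.765346027s, roughly 35 MiB/s (the adjacent "bytes read=138981893" figure differs slightly because it counts all transfer activity, not just the digest size). A quick check of that arithmetic, using the figures copied from the entry:

```go
package main

import "fmt"

func main() {
	// Figures copied from the "Pulled image" log entry above.
	const sizeBytes = 138981755     // repo digest size reported by containerd
	const pullSeconds = 3.765346027 // "in 3.765346027s"

	mib := float64(sizeBytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2fs = %.1f MiB/s\n", mib, pullSeconds, mib/pullSeconds)
}
```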
Apr 30 00:53:21.579380 kubelet[2954]: I0430 00:53:21.579270 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j7b4g" podStartSLOduration=1.006359154 podStartE2EDuration="13.579237642s" podCreationTimestamp="2025-04-30 00:53:08 +0000 UTC" firstStartedPulling="2025-04-30 00:53:08.679842212 +0000 UTC m=+23.524468543" lastFinishedPulling="2025-04-30 00:53:21.2527207 +0000 UTC m=+36.097347031" observedRunningTime="2025-04-30 00:53:21.576668261 +0000 UTC m=+36.421294592" watchObservedRunningTime="2025-04-30 00:53:21.579237642 +0000 UTC m=+36.423863973" Apr 30 00:53:29.297733 containerd[1612]: time="2025-04-30T00:53:29.297387522Z" level=info msg="StopPodSandbox for \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\"" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.386 [INFO][4364] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.386 [INFO][4364] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" iface="eth0" netns="/var/run/netns/cni-709371a7-c490-492f-c5d9-be1847ad5c64" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.387 [INFO][4364] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" iface="eth0" netns="/var/run/netns/cni-709371a7-c490-492f-c5d9-be1847ad5c64" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.387 [INFO][4364] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" iface="eth0" netns="/var/run/netns/cni-709371a7-c490-492f-c5d9-be1847ad5c64" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.387 [INFO][4364] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.387 [INFO][4364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.431 [INFO][4372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.432 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.432 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.443 [WARNING][4372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.443 [INFO][4372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.449 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:29.455198 containerd[1612]: 2025-04-30 00:53:29.452 [INFO][4364] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:29.455954 containerd[1612]: time="2025-04-30T00:53:29.455753034Z" level=info msg="TearDown network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\" successfully" Apr 30 00:53:29.455954 containerd[1612]: time="2025-04-30T00:53:29.455801595Z" level=info msg="StopPodSandbox for \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\" returns successfully" Apr 30 00:53:29.457590 containerd[1612]: time="2025-04-30T00:53:29.457225386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-rwb45,Uid:013c8471-b85b-43e8-91b6-3f1bd76d6d79,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:53:29.458794 systemd[1]: run-netns-cni\x2d709371a7\x2dc490\x2d492f\x2dc5d9\x2dbe1847ad5c64.mount: Deactivated successfully. 
Apr 30 00:53:29.677136 systemd-networkd[1242]: cali5ef88a62320: Link UP Apr 30 00:53:29.677871 systemd-networkd[1242]: cali5ef88a62320: Gained carrier Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.513 [INFO][4386] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.536 [INFO][4386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0 calico-apiserver-5bb6fc6d5c- calico-apiserver 013c8471-b85b-43e8-91b6-3f1bd76d6d79 771 0 2025-04-30 00:53:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bb6fc6d5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-6-32a99953eb calico-apiserver-5bb6fc6d5c-rwb45 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ef88a62320 [] []}} ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.537 [INFO][4386] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.589 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" HandleID="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.615 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" HandleID="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fa170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-6-32a99953eb", "pod":"calico-apiserver-5bb6fc6d5c-rwb45", "timestamp":"2025-04-30 00:53:29.589922348 +0000 UTC"}, Hostname:"ci-4081-3-3-6-32a99953eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.616 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.616 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.616 [INFO][4410] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-6-32a99953eb' Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.619 [INFO][4410] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.626 [INFO][4410] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.632 [INFO][4410] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.635 [INFO][4410] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.641 [INFO][4410] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.641 [INFO][4410] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.644 [INFO][4410] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838 Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.653 [INFO][4410] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.664 [INFO][4410] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.193/26] block=192.168.76.192/26 handle="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.664 [INFO][4410] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.193/26] handle="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.664 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
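The IPAM trace above is Calico's block-affinity scheme in miniature: the node holds an affinity for the /26 block 192.168.76.192/26, the plugin takes the host-wide IPAM lock, loads the block, claims the first free address (192.168.76.193 here), and writes the block back with a handle recording the owner. The sketch below is a toy allocator under only those assumptions; the real datastore writes and locking are elided, a map stands in for the block state, and the truncated handle string is illustrative.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the node's affine block and claims the first address that
// has no handle recorded against it, roughly what the ipam.go trace shows.
func nextFree(block netip.Prefix, allocated map[netip.Addr]string, handle string) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if _, taken := allocated[a]; !taken {
			allocated[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.76.192/26")
	allocated := map[netip.Addr]string{
		// The block's network address is not handed to workloads.
		netip.MustParseAddr("192.168.76.192"): "reserved",
	}
	ip, _ := nextFree(block, allocated, "k8s-pod-network.87eb7b39... (illustrative)")
	fmt.Println(ip) // 192.168.76.193, the IP claimed in the log
}
```

The later sandboxes in this section (csi-node-driver, coredns) repeat the same walk against the same block and receive 192.168.76.194 and 192.168.76.195 in turn.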
Apr 30 00:53:29.701928 containerd[1612]: 2025-04-30 00:53:29.664 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.193/26] IPv6=[] ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" HandleID="k8s-pod-network.87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.703801 containerd[1612]: 2025-04-30 00:53:29.667 [INFO][4386] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"013c8471-b85b-43e8-91b6-3f1bd76d6d79", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"", Pod:"calico-apiserver-5bb6fc6d5c-rwb45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef88a62320", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:29.703801 containerd[1612]: 2025-04-30 00:53:29.667 [INFO][4386] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.193/32] ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.703801 containerd[1612]: 2025-04-30 00:53:29.667 [INFO][4386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ef88a62320 ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.703801 containerd[1612]: 2025-04-30 00:53:29.678 [INFO][4386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.703801 containerd[1612]: 2025-04-30 00:53:29.678 [INFO][4386] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"013c8471-b85b-43e8-91b6-3f1bd76d6d79", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838", Pod:"calico-apiserver-5bb6fc6d5c-rwb45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef88a62320", MAC:"ae:15:79:18:fe:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:29.703801 containerd[1612]: 2025-04-30 00:53:29.695 [INFO][4386] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-rwb45" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:29.732762 containerd[1612]: time="2025-04-30T00:53:29.732466641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:29.732762 containerd[1612]: time="2025-04-30T00:53:29.732530802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:29.732762 containerd[1612]: time="2025-04-30T00:53:29.732546283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:29.732762 containerd[1612]: time="2025-04-30T00:53:29.732639045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:29.786204 containerd[1612]: time="2025-04-30T00:53:29.786164711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-rwb45,Uid:013c8471-b85b-43e8-91b6-3f1bd76d6d79,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838\"" Apr 30 00:53:29.792829 containerd[1612]: time="2025-04-30T00:53:29.792773733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:53:30.456871 systemd[1]: run-containerd-runc-k8s.io-87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838-runc.VKJ7oZ.mount: Deactivated successfully. Apr 30 00:53:30.730224 systemd-networkd[1242]: cali5ef88a62320: Gained IPv6LL Apr 30 00:53:31.300289 containerd[1612]: time="2025-04-30T00:53:31.299837816Z" level=info msg="StopPodSandbox for \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\"" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.369 [INFO][4513] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.369 [INFO][4513] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" iface="eth0" netns="/var/run/netns/cni-2240311c-bcd6-b29e-898f-d98a8e6e7a49" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.371 [INFO][4513] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" iface="eth0" netns="/var/run/netns/cni-2240311c-bcd6-b29e-898f-d98a8e6e7a49" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.372 [INFO][4513] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" iface="eth0" netns="/var/run/netns/cni-2240311c-bcd6-b29e-898f-d98a8e6e7a49" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.372 [INFO][4513] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.372 [INFO][4513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.399 [INFO][4521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.399 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.399 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.411 [WARNING][4521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.411 [INFO][4521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.415 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:31.421975 containerd[1612]: 2025-04-30 00:53:31.419 [INFO][4513] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:31.426046 containerd[1612]: time="2025-04-30T00:53:31.424044129Z" level=info msg="TearDown network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\" successfully" Apr 30 00:53:31.426046 containerd[1612]: time="2025-04-30T00:53:31.424081850Z" level=info msg="StopPodSandbox for \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\" returns successfully" Apr 30 00:53:31.426046 containerd[1612]: time="2025-04-30T00:53:31.424750424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4568b,Uid:128e4be7-9f7d-4c2d-8a19-50ffaa3dc839,Namespace:calico-system,Attempt:1,}" Apr 30 00:53:31.424509 systemd[1]: run-netns-cni\x2d2240311c\x2dbcd6\x2db29e\x2d898f\x2dd98a8e6e7a49.mount: Deactivated successfully. Apr 30 00:53:31.588529 systemd-networkd[1242]: calife66e6e5cc8: Link UP Apr 30 00:53:31.589728 systemd-networkd[1242]: calife66e6e5cc8: Gained carrier Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.471 [INFO][4531] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.492 [INFO][4531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0 csi-node-driver- calico-system 128e4be7-9f7d-4c2d-8a19-50ffaa3dc839 780 0 2025-04-30 00:53:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-6-32a99953eb csi-node-driver-4568b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calife66e6e5cc8 [] []}} ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.493 [INFO][4531] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.531 [INFO][4540] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" HandleID="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.547 [INFO][4540] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" HandleID="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004212a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-6-32a99953eb", "pod":"csi-node-driver-4568b", "timestamp":"2025-04-30 00:53:31.531928182 +0000 UTC"}, Hostname:"ci-4081-3-3-6-32a99953eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.548 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.548 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.548 [INFO][4540] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-6-32a99953eb' Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.550 [INFO][4540] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.556 [INFO][4540] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.561 [INFO][4540] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.564 [INFO][4540] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.567 [INFO][4540] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.567 [INFO][4540] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.569 [INFO][4540] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47 Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.575 [INFO][4540] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.583 [INFO][4540] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.194/26] block=192.168.76.192/26 handle="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" 
host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.583 [INFO][4540] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.194/26] handle="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.583 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:31.610307 containerd[1612]: 2025-04-30 00:53:31.583 [INFO][4540] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.194/26] IPv6=[] ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" HandleID="k8s-pod-network.332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.611449 containerd[1612]: 2025-04-30 00:53:31.585 [INFO][4531] cni-plugin/k8s.go 386: Populated endpoint ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"", Pod:"csi-node-driver-4568b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife66e6e5cc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:31.611449 containerd[1612]: 2025-04-30 00:53:31.585 [INFO][4531] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.194/32] ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.611449 containerd[1612]: 2025-04-30 00:53:31.585 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife66e6e5cc8 ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.611449 containerd[1612]: 2025-04-30 00:53:31.590 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.611449 containerd[1612]: 2025-04-30 00:53:31.590 [INFO][4531] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47", Pod:"csi-node-driver-4568b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife66e6e5cc8", MAC:"e6:19:07:5e:5d:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:31.611449 containerd[1612]: 2025-04-30 00:53:31.605 [INFO][4531] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47" Namespace="calico-system" Pod="csi-node-driver-4568b" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:31.635929 containerd[1612]: time="2025-04-30T00:53:31.635650587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:31.635929 containerd[1612]: time="2025-04-30T00:53:31.635778590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:31.635929 containerd[1612]: time="2025-04-30T00:53:31.635799470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:31.637395 containerd[1612]: time="2025-04-30T00:53:31.637325942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:31.788682 containerd[1612]: time="2025-04-30T00:53:31.788444977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4568b,Uid:128e4be7-9f7d-4c2d-8a19-50ffaa3dc839,Namespace:calico-system,Attempt:1,} returns sandbox id \"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47\"" Apr 30 00:53:32.087129 kubelet[2954]: I0430 00:53:32.086947 2954 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:53:32.298581 containerd[1612]: time="2025-04-30T00:53:32.298417268Z" level=info msg="StopPodSandbox for \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\"" Apr 30 00:53:32.299914 containerd[1612]: time="2025-04-30T00:53:32.299846258Z" level=info msg="StopPodSandbox for \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\"" Apr 30 00:53:32.304703 containerd[1612]: time="2025-04-30T00:53:32.304401352Z" level=info msg="StopPodSandbox for \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\"" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.415 [INFO][4661] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.419 [INFO][4661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" iface="eth0" netns="/var/run/netns/cni-e94284f9-c518-bcb4-fa97-5a18bb75e995" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.420 [INFO][4661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" iface="eth0" netns="/var/run/netns/cni-e94284f9-c518-bcb4-fa97-5a18bb75e995" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.422 [INFO][4661] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" iface="eth0" netns="/var/run/netns/cni-e94284f9-c518-bcb4-fa97-5a18bb75e995" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.422 [INFO][4661] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.422 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.513 [INFO][4690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.514 [INFO][4690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.514 [INFO][4690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.537 [WARNING][4690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.537 [INFO][4690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.542 [INFO][4690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:32.549116 containerd[1612]: 2025-04-30 00:53:32.545 [INFO][4661] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:32.553077 systemd[1]: run-netns-cni\x2de94284f9\x2dc518\x2dbcb4\x2dfa97\x2d5a18bb75e995.mount: Deactivated successfully. Apr 30 00:53:32.557690 containerd[1612]: time="2025-04-30T00:53:32.556802557Z" level=info msg="TearDown network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\" successfully" Apr 30 00:53:32.557690 containerd[1612]: time="2025-04-30T00:53:32.556846398Z" level=info msg="StopPodSandbox for \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\" returns successfully" Apr 30 00:53:32.560885 containerd[1612]: time="2025-04-30T00:53:32.560734278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnv8,Uid:3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8,Namespace:kube-system,Attempt:1,}" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.417 [INFO][4664] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.418 [INFO][4664] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" iface="eth0" netns="/var/run/netns/cni-b213cdf5-5127-3d62-ded1-ffd9a2df8d9c" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.419 [INFO][4664] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" iface="eth0" netns="/var/run/netns/cni-b213cdf5-5127-3d62-ded1-ffd9a2df8d9c" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.422 [INFO][4664] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" iface="eth0" netns="/var/run/netns/cni-b213cdf5-5127-3d62-ded1-ffd9a2df8d9c" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.422 [INFO][4664] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.425 [INFO][4664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.529 [INFO][4687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.529 [INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.542 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.561 [WARNING][4687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.561 [INFO][4687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.564 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:32.578658 containerd[1612]: 2025-04-30 00:53:32.571 [INFO][4664] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:32.580156 containerd[1612]: time="2025-04-30T00:53:32.579922113Z" level=info msg="TearDown network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\" successfully" Apr 30 00:53:32.580156 containerd[1612]: time="2025-04-30T00:53:32.580011955Z" level=info msg="StopPodSandbox for \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\" returns successfully" Apr 30 00:53:32.588243 systemd[1]: run-netns-cni\x2db213cdf5\x2d5127\x2d3d62\x2dded1\x2dffd9a2df8d9c.mount: Deactivated successfully. Apr 30 00:53:32.600060 containerd[1612]: time="2025-04-30T00:53:32.597347833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v4z9k,Uid:4c9552da-7cf1-4abb-b74a-aa79391aa549,Namespace:kube-system,Attempt:1,}" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.412 [INFO][4671] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.413 [INFO][4671] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" iface="eth0" netns="/var/run/netns/cni-79f74efd-1cf9-2c21-79aa-bd7a80b66faf" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.417 [INFO][4671] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" iface="eth0" netns="/var/run/netns/cni-79f74efd-1cf9-2c21-79aa-bd7a80b66faf" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.417 [INFO][4671] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" iface="eth0" netns="/var/run/netns/cni-79f74efd-1cf9-2c21-79aa-bd7a80b66faf" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.417 [INFO][4671] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.417 [INFO][4671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.530 [INFO][4683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.530 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.566 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.589 [WARNING][4683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.589 [INFO][4683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.593 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:32.617867 containerd[1612]: 2025-04-30 00:53:32.599 [INFO][4671] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:32.621342 containerd[1612]: time="2025-04-30T00:53:32.621032921Z" level=info msg="TearDown network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\" successfully" Apr 30 00:53:32.621750 containerd[1612]: time="2025-04-30T00:53:32.621505771Z" level=info msg="StopPodSandbox for \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\" returns successfully" Apr 30 00:53:32.622892 containerd[1612]: time="2025-04-30T00:53:32.622796878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-65nrh,Uid:f8d0391c-5506-4c4e-a935-4ff4aee98d0c,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:53:32.646588 systemd[1]: run-netns-cni\x2d79f74efd\x2d1cf9\x2d2c21\x2d79aa\x2dbd7a80b66faf.mount: Deactivated successfully. Apr 30 00:53:32.650156 kernel: bpftool[4730]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:53:32.780035 systemd-networkd[1242]: calife66e6e5cc8: Gained IPv6LL Apr 30 00:53:32.931333 systemd-networkd[1242]: cali889e871d0f7: Link UP Apr 30 00:53:32.931526 systemd-networkd[1242]: cali889e871d0f7: Gained carrier Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.684 [INFO][4714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0 coredns-7db6d8ff4d- kube-system 3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8 797 0 2025-04-30 00:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-6-32a99953eb coredns-7db6d8ff4d-9hnv8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali889e871d0f7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.684 [INFO][4714] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.773 [INFO][4735] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" HandleID="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.818 [INFO][4735] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" HandleID="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d420), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-6-32a99953eb", "pod":"coredns-7db6d8ff4d-9hnv8", "timestamp":"2025-04-30 00:53:32.77324566 +0000 UTC"}, 
Hostname:"ci-4081-3-3-6-32a99953eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.822 [INFO][4735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.822 [INFO][4735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.822 [INFO][4735] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-6-32a99953eb' Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.829 [INFO][4735] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.840 [INFO][4735] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.860 [INFO][4735] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.869 [INFO][4735] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.875 [INFO][4735] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.875 [INFO][4735] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.880 [INFO][4735] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.886 [INFO][4735] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.913 [INFO][4735] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.195/26] block=192.168.76.192/26 handle="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.913 [INFO][4735] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.195/26] handle="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.913 [INFO][4735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:53:32.985057 containerd[1612]: 2025-04-30 00:53:32.913 [INFO][4735] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.195/26] IPv6=[] ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" HandleID="k8s-pod-network.1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.985694 containerd[1612]: 2025-04-30 00:53:32.922 [INFO][4714] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"", Pod:"coredns-7db6d8ff4d-9hnv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali889e871d0f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:32.985694 containerd[1612]: 2025-04-30 00:53:32.923 [INFO][4714] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.195/32] ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.985694 containerd[1612]: 2025-04-30 00:53:32.925 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali889e871d0f7 ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.985694 containerd[1612]: 2025-04-30 00:53:32.929 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" 
WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:32.985694 containerd[1612]: 2025-04-30 00:53:32.931 [INFO][4714] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d", Pod:"coredns-7db6d8ff4d-9hnv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali889e871d0f7", MAC:"76:c0:51:5f:7d:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:32.985694 containerd[1612]: 2025-04-30 00:53:32.977 [INFO][4714] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hnv8" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:33.035398 containerd[1612]: time="2025-04-30T00:53:33.034269274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:33.035398 containerd[1612]: time="2025-04-30T00:53:33.034429918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:33.035398 containerd[1612]: time="2025-04-30T00:53:33.034471918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:33.041069 containerd[1612]: time="2025-04-30T00:53:33.034594921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:33.118834 containerd[1612]: time="2025-04-30T00:53:33.118781756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hnv8,Uid:3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8,Namespace:kube-system,Attempt:1,} returns sandbox id \"1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d\"" Apr 30 00:53:33.129498 containerd[1612]: time="2025-04-30T00:53:33.129421693Z" level=info msg="CreateContainer within sandbox \"1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:53:33.159370 containerd[1612]: time="2025-04-30T00:53:33.159313662Z" level=info msg="CreateContainer within sandbox \"1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88364fd6a1afb145a28cae19b1c8bc7d542ae9ebe56ec1ea1b8522d973bd8dfd\"" Apr 30 00:53:33.164424 containerd[1612]: time="2025-04-30T00:53:33.160824292Z" level=info msg="StartContainer for \"88364fd6a1afb145a28cae19b1c8bc7d542ae9ebe56ec1ea1b8522d973bd8dfd\"" Apr 30 00:53:33.197319 systemd-networkd[1242]: cali3e5a1a1bdf4: Link UP Apr 30 00:53:33.201602 systemd-networkd[1242]: cali3e5a1a1bdf4: Gained carrier Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:32.894 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0 coredns-7db6d8ff4d- kube-system 4c9552da-7cf1-4abb-b74a-aa79391aa549 799 0 2025-04-30 00:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-6-32a99953eb coredns-7db6d8ff4d-v4z9k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e5a1a1bdf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:32.895 [INFO][4741] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:32.997 [INFO][4792] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" HandleID="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.062 [INFO][4792] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" HandleID="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039d510), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-6-32a99953eb", 
"pod":"coredns-7db6d8ff4d-v4z9k", "timestamp":"2025-04-30 00:53:32.99727752 +0000 UTC"}, Hostname:"ci-4081-3-3-6-32a99953eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.062 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.064 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.064 [INFO][4792] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-6-32a99953eb' Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.072 [INFO][4792] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.085 [INFO][4792] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.113 [INFO][4792] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.122 [INFO][4792] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.143 [INFO][4792] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.143 [INFO][4792] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.149 [INFO][4792] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62 Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.167 [INFO][4792] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.186 [INFO][4792] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.196/26] block=192.168.76.192/26 handle="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.186 [INFO][4792] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.196/26] handle="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.186 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:53:33.244012 containerd[1612]: 2025-04-30 00:53:33.186 [INFO][4792] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.196/26] IPv6=[] ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" HandleID="k8s-pod-network.b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:33.244737 containerd[1612]: 2025-04-30 00:53:33.190 [INFO][4741] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4c9552da-7cf1-4abb-b74a-aa79391aa549", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"", Pod:"coredns-7db6d8ff4d-v4z9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e5a1a1bdf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:33.244737 containerd[1612]: 2025-04-30 00:53:33.191 [INFO][4741] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.196/32] ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:33.244737 containerd[1612]: 2025-04-30 00:53:33.191 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e5a1a1bdf4 ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:33.244737 containerd[1612]: 2025-04-30 00:53:33.196 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" 
WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:33.244737 containerd[1612]: 2025-04-30 00:53:33.198 [INFO][4741] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4c9552da-7cf1-4abb-b74a-aa79391aa549", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62", Pod:"coredns-7db6d8ff4d-v4z9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e5a1a1bdf4", MAC:"7e:aa:8d:e3:d7:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:33.244737 containerd[1612]: 2025-04-30 00:53:33.222 [INFO][4741] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v4z9k" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:33.304752 containerd[1612]: time="2025-04-30T00:53:33.299760683Z" level=info msg="StopPodSandbox for \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\"" Apr 30 00:53:33.362597 systemd-networkd[1242]: cali6f6ff4fd6ba: Link UP Apr 30 00:53:33.362857 systemd-networkd[1242]: cali6f6ff4fd6ba: Gained carrier Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:32.843 [INFO][4744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0 calico-apiserver-5bb6fc6d5c- calico-apiserver f8d0391c-5506-4c4e-a935-4ff4aee98d0c 798 0 2025-04-30 00:53:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bb6fc6d5c projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-6-32a99953eb calico-apiserver-5bb6fc6d5c-65nrh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f6ff4fd6ba [] []}} ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:32.843 [INFO][4744] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.041 [INFO][4782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" HandleID="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.074 [INFO][4782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" HandleID="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000269b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-6-32a99953eb", "pod":"calico-apiserver-5bb6fc6d5c-65nrh", "timestamp":"2025-04-30 00:53:33.028335673 +0000 UTC"}, Hostname:"ci-4081-3-3-6-32a99953eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.077 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.186 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.186 [INFO][4782] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-6-32a99953eb' Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.190 [INFO][4782] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.209 [INFO][4782] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.239 [INFO][4782] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.247 [INFO][4782] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.254 [INFO][4782] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.254 [INFO][4782] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.260 [INFO][4782] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.281 [INFO][4782] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.301 [INFO][4782] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.197/26] block=192.168.76.192/26 handle="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.301 [INFO][4782] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.197/26] handle="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.301 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
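Note how the host-wide IPAM lock serializes the three concurrent CNI ADDs: the apiserver allocation ([4782]) logged "About to acquire" at 33.077 but only acquired at 33.186, the instant the coredns allocation ([4792]) released. A minimal sketch of that serialization with a process-local mutex; Calico's real lock is datastore-backed, and the pod names and seed address below are lifted from this log purely for illustration (goroutine scheduling order, and hence the exact pod-to-IP pairing, is nondeterministic):

```go
// Sketch: a single lock serializes concurrent address assignments,
// so parallel CNI ADDs can never claim the same IP.
package main

import (
	"fmt"
	"sync"
)

var (
	ipamLock sync.Mutex
	next     = 195 // last octet handed out, seeded to match the log
)

func autoAssign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("[%s] about to acquire host-wide IPAM lock\n", pod)
	ipamLock.Lock() // blocks until the previous ADD releases, as [4782] did
	defer ipamLock.Unlock()
	fmt.Printf("[%s] acquired lock, assigned 192.168.76.%d/26\n", pod, next)
	next++
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-9hnv8", "coredns-v4z9k", "apiserver-65nrh"} {
		wg.Add(1)
		go autoAssign(pod, &wg)
	}
	wg.Wait() // three distinct addresses, never a duplicate
}
```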
Apr 30 00:53:33.399924 containerd[1612]: 2025-04-30 00:53:33.301 [INFO][4782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.197/26] IPv6=[] ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" HandleID="k8s-pod-network.b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:33.401123 containerd[1612]: 2025-04-30 00:53:33.354 [INFO][4744] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8d0391c-5506-4c4e-a935-4ff4aee98d0c", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"", Pod:"calico-apiserver-5bb6fc6d5c-65nrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f6ff4fd6ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:33.401123 containerd[1612]: 2025-04-30 00:53:33.354 [INFO][4744] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.197/32] ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:33.401123 containerd[1612]: 2025-04-30 00:53:33.354 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f6ff4fd6ba ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:33.401123 containerd[1612]: 2025-04-30 00:53:33.362 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:33.401123 containerd[1612]: 2025-04-30 00:53:33.365 [INFO][4744] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8d0391c-5506-4c4e-a935-4ff4aee98d0c", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b", Pod:"calico-apiserver-5bb6fc6d5c-65nrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f6ff4fd6ba", MAC:"b6:58:1a:c0:4e:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:33.401123 containerd[1612]: 2025-04-30 00:53:33.392 [INFO][4744] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b" Namespace="calico-apiserver" Pod="calico-apiserver-5bb6fc6d5c-65nrh" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:33.402061 containerd[1612]: time="2025-04-30T00:53:33.401785521Z" level=info msg="StartContainer for \"88364fd6a1afb145a28cae19b1c8bc7d542ae9ebe56ec1ea1b8522d973bd8dfd\" returns successfully" Apr 30 00:53:33.405333 containerd[1612]: time="2025-04-30T00:53:33.399244110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:33.405333 containerd[1612]: time="2025-04-30T00:53:33.399319311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:33.405333 containerd[1612]: time="2025-04-30T00:53:33.399332831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:33.405333 containerd[1612]: time="2025-04-30T00:53:33.399441154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:33.509990 containerd[1612]: time="2025-04-30T00:53:33.509396394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v4z9k,Uid:4c9552da-7cf1-4abb-b74a-aa79391aa549,Namespace:kube-system,Attempt:1,} returns sandbox id \"b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62\"" Apr 30 00:53:33.520914 containerd[1612]: time="2025-04-30T00:53:33.519746204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:33.520914 containerd[1612]: time="2025-04-30T00:53:33.519819606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:33.520914 containerd[1612]: time="2025-04-30T00:53:33.519845966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:33.520914 containerd[1612]: time="2025-04-30T00:53:33.519996250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:33.533965 containerd[1612]: time="2025-04-30T00:53:33.533057116Z" level=info msg="CreateContainer within sandbox \"b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:53:33.608174 containerd[1612]: time="2025-04-30T00:53:33.608116725Z" level=info msg="CreateContainer within sandbox \"b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"271fa705c83e2a7621fead86a991c830a19759329c805bc2b6be52afb8d18b15\"" Apr 30 00:53:33.612555 containerd[1612]: time="2025-04-30T00:53:33.611910322Z" level=info msg="StartContainer for \"271fa705c83e2a7621fead86a991c830a19759329c805bc2b6be52afb8d18b15\"" Apr 30 00:53:33.687336 kubelet[2954]: I0430 00:53:33.684421 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hnv8" podStartSLOduration=33.684174994 podStartE2EDuration="33.684174994s" podCreationTimestamp="2025-04-30 00:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:33.68300221 +0000 UTC m=+48.527628661" watchObservedRunningTime="2025-04-30 00:53:33.684174994 +0000 UTC m=+48.528801325" Apr 30 00:53:33.766212 systemd-networkd[1242]: vxlan.calico: Link UP Apr 30 00:53:33.766218 systemd-networkd[1242]: vxlan.calico: Gained carrier Apr 30 00:53:33.817744 systemd[1]: run-containerd-runc-k8s.io-271fa705c83e2a7621fead86a991c830a19759329c805bc2b6be52afb8d18b15-runc.s8AQlD.mount: Deactivated successfully. 
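The kubelet `pod_startup_latency_tracker` entry above reports `podStartSLOduration=33.684174994` with zero-valued pull timestamps (the image was never pulled during this window); the duration is simply observedRunningTime minus podCreationTimestamp. A quick check of that arithmetic:

```go
// Verify podStartSLOduration for coredns-7db6d8ff4d-9hnv8 from the
// timestamps in the kubelet entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-04-30T00:53:00Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-04-30T00:53:33.684174994Z")
	fmt.Println(running.Sub(created)) // 33.684174994s, matching the log
}
```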
Apr 30 00:53:33.887904 containerd[1612]: time="2025-04-30T00:53:33.887842863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb6fc6d5c-65nrh,Uid:f8d0391c-5506-4c4e-a935-4ff4aee98d0c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b\"" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.678 [INFO][5001] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.679 [INFO][5001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" iface="eth0" netns="/var/run/netns/cni-872b8508-222c-0eec-a7a0-26c185215d5d" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.681 [INFO][5001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" iface="eth0" netns="/var/run/netns/cni-872b8508-222c-0eec-a7a0-26c185215d5d" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.688 [INFO][5001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" iface="eth0" netns="/var/run/netns/cni-872b8508-222c-0eec-a7a0-26c185215d5d" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.692 [INFO][5001] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.692 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.862 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.862 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.862 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.882 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.884 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.889 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
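Teardown of the old `9e02dd1aad0…` sandbox above releases its address in two steps: first by handle ID, which WARNs "Asked to release address but it doesn't exist. Ignoring", then by workload ID, so a release that finds nothing is treated as success and repeated teardowns stay idempotent. A sketch of that ignore-missing release pattern; the `allocator` type is an illustrative stand-in, not Calico's:

```go
// Sketch: two-step, idempotent address release (handle ID, then workload ID),
// mirroring the ipam_plugin.go 412/429/440 entries above.
package main

import "fmt"

type allocator struct {
	byHandle   map[string]string // handle -> IP
	byWorkload map[string]string // workload -> IP
}

func (a *allocator) release(handle, workload string) {
	if ip, ok := a.byHandle[handle]; ok {
		delete(a.byHandle, handle)
		fmt.Println("released", ip, "via handle ID")
		return
	}
	// Mirrors: "Asked to release address but it doesn't exist. Ignoring"
	fmt.Println("WARNING: no address recorded for handle, ignoring")
	if ip, ok := a.byWorkload[workload]; ok {
		delete(a.byWorkload, workload)
		fmt.Println("released", ip, "via workload ID")
		return
	}
	fmt.Println("nothing held for workload either; teardown is a no-op")
}

func main() {
	a := &allocator{byHandle: map[string]string{}, byWorkload: map[string]string{}}
	// Both lookups miss, as in the log, and the release still succeeds.
	a.release(
		"k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395",
		"calico-kube-controllers-6f9694b44d-f2nb6",
	)
}
```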
Apr 30 00:53:33.927782 containerd[1612]: 2025-04-30 00:53:33.917 [INFO][5001] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:33.927782 containerd[1612]: time="2025-04-30T00:53:33.927246786Z" level=info msg="TearDown network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\" successfully" Apr 30 00:53:33.927782 containerd[1612]: time="2025-04-30T00:53:33.927286787Z" level=info msg="StopPodSandbox for \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\" returns successfully" Apr 30 00:53:33.934745 containerd[1612]: time="2025-04-30T00:53:33.929298788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9694b44d-f2nb6,Uid:ef26787d-7604-4d5c-b737-eddfb5e0d093,Namespace:calico-system,Attempt:1,}" Apr 30 00:53:33.933548 systemd[1]: run-netns-cni\x2d872b8508\x2d222c\x2d0eec\x2da7a0\x2d26c185215d5d.mount: Deactivated successfully. Apr 30 00:53:33.940801 containerd[1612]: time="2025-04-30T00:53:33.940626139Z" level=info msg="StartContainer for \"271fa705c83e2a7621fead86a991c830a19759329c805bc2b6be52afb8d18b15\" returns successfully" Apr 30 00:53:34.146020 systemd-networkd[1242]: cali4b965c4fede: Link UP Apr 30 00:53:34.146745 systemd-networkd[1242]: cali4b965c4fede: Gained carrier Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.025 [INFO][5115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0 calico-kube-controllers-6f9694b44d- calico-system ef26787d-7604-4d5c-b737-eddfb5e0d093 816 0 2025-04-30 00:53:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f9694b44d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-6-32a99953eb calico-kube-controllers-6f9694b44d-f2nb6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4b965c4fede [] []}} ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.025 [INFO][5115] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.066 [INFO][5127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" HandleID="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.086 [INFO][5127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" 
HandleID="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000222ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-6-32a99953eb", "pod":"calico-kube-controllers-6f9694b44d-f2nb6", "timestamp":"2025-04-30 00:53:34.066441606 +0000 UTC"}, Hostname:"ci-4081-3-3-6-32a99953eb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.087 [INFO][5127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.088 [INFO][5127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.088 [INFO][5127] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-6-32a99953eb' Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.092 [INFO][5127] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.099 [INFO][5127] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.107 [INFO][5127] ipam/ipam.go 489: Trying affinity for 192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.110 [INFO][5127] ipam/ipam.go 155: Attempting to load block cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.114 [INFO][5127] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.114 [INFO][5127] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.117 [INFO][5127] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.124 [INFO][5127] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.137 [INFO][5127] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.76.198/26] block=192.168.76.192/26 handle="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.138 [INFO][5127] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.198/26] handle="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" host="ci-4081-3-3-6-32a99953eb" Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.138 [INFO][5127] ipam/ipam_plugin.go 374: Released host-wide 
IPAM lock. Apr 30 00:53:34.170018 containerd[1612]: 2025-04-30 00:53:34.138 [INFO][5127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.198/26] IPv6=[] ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" HandleID="k8s-pod-network.13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:34.170888 containerd[1612]: 2025-04-30 00:53:34.143 [INFO][5115] cni-plugin/k8s.go 386: Populated endpoint ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0", GenerateName:"calico-kube-controllers-6f9694b44d-", Namespace:"calico-system", SelfLink:"", UID:"ef26787d-7604-4d5c-b737-eddfb5e0d093", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9694b44d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"", Pod:"calico-kube-controllers-6f9694b44d-f2nb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b965c4fede", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:34.170888 containerd[1612]: 2025-04-30 00:53:34.143 [INFO][5115] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.76.198/32] ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:34.170888 containerd[1612]: 2025-04-30 00:53:34.143 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b965c4fede ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:34.170888 containerd[1612]: 2025-04-30 00:53:34.147 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 
00:53:34.170888 containerd[1612]: 2025-04-30 00:53:34.147 [INFO][5115] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0", GenerateName:"calico-kube-controllers-6f9694b44d-", Namespace:"calico-system", SelfLink:"", UID:"ef26787d-7604-4d5c-b737-eddfb5e0d093", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9694b44d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b", Pod:"calico-kube-controllers-6f9694b44d-f2nb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b965c4fede", MAC:"ee:64:72:b7:bd:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:34.170888 containerd[1612]: 2025-04-30 00:53:34.165 [INFO][5115] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b" Namespace="calico-system" Pod="calico-kube-controllers-6f9694b44d-f2nb6" WorkloadEndpoint="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:34.214073 containerd[1612]: time="2025-04-30T00:53:34.213443805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:34.214073 containerd[1612]: time="2025-04-30T00:53:34.213520807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:34.214073 containerd[1612]: time="2025-04-30T00:53:34.213536967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:34.214073 containerd[1612]: time="2025-04-30T00:53:34.213630889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:34.277661 containerd[1612]: time="2025-04-30T00:53:34.277585936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9694b44d-f2nb6,Uid:ef26787d-7604-4d5c-b737-eddfb5e0d093,Namespace:calico-system,Attempt:1,} returns sandbox id \"13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b\"" Apr 30 00:53:34.637550 systemd-networkd[1242]: cali889e871d0f7: Gained IPv6LL Apr 30 00:53:34.689263 kubelet[2954]: I0430 00:53:34.688874 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v4z9k" podStartSLOduration=34.688852295 podStartE2EDuration="34.688852295s" podCreationTimestamp="2025-04-30 00:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:34.68860665 +0000 UTC m=+49.533232981" watchObservedRunningTime="2025-04-30 00:53:34.688852295 +0000 UTC m=+49.533478626" Apr 30 00:53:34.762435 systemd-networkd[1242]: cali6f6ff4fd6ba: Gained IPv6LL Apr 30 00:53:34.954254 systemd-networkd[1242]: cali3e5a1a1bdf4: Gained IPv6LL Apr 30 00:53:35.402287 systemd-networkd[1242]: vxlan.calico: Gained IPv6LL Apr 30 00:53:35.978197 systemd-networkd[1242]: cali4b965c4fede: Gained IPv6LL Apr 30 00:53:45.316231 containerd[1612]: time="2025-04-30T00:53:45.316166878Z" level=info msg="StopPodSandbox for \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\"" Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.361 [WARNING][5263] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47", Pod:"csi-node-driver-4568b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife66e6e5cc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.361 [INFO][5263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 
00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.361 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" iface="eth0" netns="" Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.361 [INFO][5263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.361 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.396 [INFO][5270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.396 [INFO][5270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.396 [INFO][5270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.412 [WARNING][5270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.413 [INFO][5270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.415 [INFO][5270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:45.418635 containerd[1612]: 2025-04-30 00:53:45.416 [INFO][5263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:45.419858 containerd[1612]: time="2025-04-30T00:53:45.418790798Z" level=info msg="TearDown network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\" successfully" Apr 30 00:53:45.419858 containerd[1612]: time="2025-04-30T00:53:45.418818238Z" level=info msg="StopPodSandbox for \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\" returns successfully" Apr 30 00:53:45.419858 containerd[1612]: time="2025-04-30T00:53:45.419742455Z" level=info msg="RemovePodSandbox for \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\"" Apr 30 00:53:45.419858 containerd[1612]: time="2025-04-30T00:53:45.419777656Z" level=info msg="Forcibly stopping sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\"" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.462 [WARNING][5288] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"128e4be7-9f7d-4c2d-8a19-50ffaa3dc839", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47", Pod:"csi-node-driver-4568b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife66e6e5cc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.462 [INFO][5288] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.462 [INFO][5288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" iface="eth0" netns="" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.462 [INFO][5288] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.462 [INFO][5288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.483 [INFO][5295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.484 [INFO][5295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.484 [INFO][5295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.499 [WARNING][5295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.499 [INFO][5295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" HandleID="k8s-pod-network.d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Workload="ci--4081--3--3--6--32a99953eb-k8s-csi--node--driver--4568b-eth0" Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.505 [INFO][5295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:45.510757 containerd[1612]: 2025-04-30 00:53:45.508 [INFO][5288] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224" Apr 30 00:53:45.511230 containerd[1612]: time="2025-04-30T00:53:45.510862848Z" level=info msg="TearDown network for sandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\" successfully" Apr 30 00:53:45.522392 containerd[1612]: time="2025-04-30T00:53:45.522303813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:53:45.522556 containerd[1612]: time="2025-04-30T00:53:45.522419175Z" level=info msg="RemovePodSandbox \"d441b511c70e48759b29c24f0019713e806363f2e83c46f6c5bcdee6a1d37224\" returns successfully" Apr 30 00:53:45.523367 containerd[1612]: time="2025-04-30T00:53:45.523297311Z" level=info msg="StopPodSandbox for \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\"" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.581 [WARNING][5313] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d", Pod:"coredns-7db6d8ff4d-9hnv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali889e871d0f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.582 [INFO][5313] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.582 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" iface="eth0" netns="" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.582 [INFO][5313] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.582 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.627 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.627 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.627 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.638 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.639 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.642 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:45.647655 containerd[1612]: 2025-04-30 00:53:45.644 [INFO][5313] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.649218 containerd[1612]: time="2025-04-30T00:53:45.647611859Z" level=info msg="TearDown network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\" successfully" Apr 30 00:53:45.649218 containerd[1612]: time="2025-04-30T00:53:45.648058107Z" level=info msg="StopPodSandbox for \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\" returns successfully" Apr 30 00:53:45.649218 containerd[1612]: time="2025-04-30T00:53:45.648810601Z" level=info msg="RemovePodSandbox for \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\"" Apr 30 00:53:45.649218 containerd[1612]: time="2025-04-30T00:53:45.648872402Z" level=info msg="Forcibly stopping sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\"" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.700 [WARNING][5340] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3f46f9d3-ecac-48f0-bbad-f7e9b2753fe8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"1249d3285e7485413d0145ab28f49ae4a79d2ba0a34a19740fbde6867772ae8d", Pod:"coredns-7db6d8ff4d-9hnv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali889e871d0f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.700 [INFO][5340] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.700 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" iface="eth0" netns="" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.700 [INFO][5340] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.700 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.726 [INFO][5347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.726 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.726 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.740 [WARNING][5347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.740 [INFO][5347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" HandleID="k8s-pod-network.ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--9hnv8-eth0" Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.743 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:45.748409 containerd[1612]: 2025-04-30 00:53:45.745 [INFO][5340] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074" Apr 30 00:53:45.748409 containerd[1612]: time="2025-04-30T00:53:45.747664572Z" level=info msg="TearDown network for sandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\" successfully" Apr 30 00:53:45.753813 containerd[1612]: time="2025-04-30T00:53:45.753752882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:53:45.754071 containerd[1612]: time="2025-04-30T00:53:45.754048127Z" level=info msg="RemovePodSandbox \"ee87e87dfe8b10d7f17dafc0fc3c482739e6178b6c1b3aa53037d14d54595074\" returns successfully" Apr 30 00:53:45.754700 containerd[1612]: time="2025-04-30T00:53:45.754662378Z" level=info msg="StopPodSandbox for \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\"" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.802 [WARNING][5365] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8d0391c-5506-4c4e-a935-4ff4aee98d0c", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b", Pod:"calico-apiserver-5bb6fc6d5c-65nrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f6ff4fd6ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.803 [INFO][5365] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.803 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" iface="eth0" netns="" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.803 [INFO][5365] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.803 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.832 [INFO][5372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.832 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.832 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.846 [WARNING][5372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.846 [INFO][5372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.850 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:45.856452 containerd[1612]: 2025-04-30 00:53:45.853 [INFO][5365] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.856452 containerd[1612]: time="2025-04-30T00:53:45.856344480Z" level=info msg="TearDown network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\" successfully" Apr 30 00:53:45.856452 containerd[1612]: time="2025-04-30T00:53:45.856400321Z" level=info msg="StopPodSandbox for \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\" returns successfully" Apr 30 00:53:45.857931 containerd[1612]: time="2025-04-30T00:53:45.857872748Z" level=info msg="RemovePodSandbox for \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\"" Apr 30 00:53:45.857931 containerd[1612]: time="2025-04-30T00:53:45.857919469Z" level=info msg="Forcibly stopping sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\"" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.911 [WARNING][5390] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8d0391c-5506-4c4e-a935-4ff4aee98d0c", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b", Pod:"calico-apiserver-5bb6fc6d5c-65nrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f6ff4fd6ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.911 [INFO][5390] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.911 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" iface="eth0" netns="" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.911 [INFO][5390] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.911 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.935 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.935 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.935 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.949 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.949 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" HandleID="k8s-pod-network.125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--65nrh-eth0" Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.952 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:45.957732 containerd[1612]: 2025-04-30 00:53:45.954 [INFO][5390] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a" Apr 30 00:53:45.957732 containerd[1612]: time="2025-04-30T00:53:45.956893483Z" level=info msg="TearDown network for sandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\" successfully" Apr 30 00:53:45.960861 containerd[1612]: time="2025-04-30T00:53:45.960802473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:53:45.961069 containerd[1612]: time="2025-04-30T00:53:45.960884794Z" level=info msg="RemovePodSandbox \"125a847d8efe29396cb8b20698255c990de27fc6b080a6eb80f370013334e28a\" returns successfully" Apr 30 00:53:45.962046 containerd[1612]: time="2025-04-30T00:53:45.961691209Z" level=info msg="StopPodSandbox for \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\"" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.007 [WARNING][5416] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4c9552da-7cf1-4abb-b74a-aa79391aa549", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62", Pod:"coredns-7db6d8ff4d-v4z9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e5a1a1bdf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.007 [INFO][5416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.007 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" iface="eth0" netns="" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.007 [INFO][5416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.007 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.037 [INFO][5423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.037 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.037 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.052 [WARNING][5423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.052 [INFO][5423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.055 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:46.059407 containerd[1612]: 2025-04-30 00:53:46.057 [INFO][5416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.059407 containerd[1612]: time="2025-04-30T00:53:46.058978383Z" level=info msg="TearDown network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\" successfully" Apr 30 00:53:46.059407 containerd[1612]: time="2025-04-30T00:53:46.059009743Z" level=info msg="StopPodSandbox for \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\" returns successfully" Apr 30 00:53:46.060843 containerd[1612]: time="2025-04-30T00:53:46.060579291Z" level=info msg="RemovePodSandbox for \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\"" Apr 30 00:53:46.060843 containerd[1612]: time="2025-04-30T00:53:46.060629612Z" level=info msg="Forcibly stopping sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\"" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.113 [WARNING][5441] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4c9552da-7cf1-4abb-b74a-aa79391aa549", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"b548aedd55c5d27826847532840a6103d492afc91072402d2aa946574291df62", Pod:"coredns-7db6d8ff4d-v4z9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e5a1a1bdf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.113 [INFO][5441] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.113 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" iface="eth0" netns="" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.113 [INFO][5441] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.113 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.140 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.140 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.140 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.151 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.151 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" HandleID="k8s-pod-network.c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Workload="ci--4081--3--3--6--32a99953eb-k8s-coredns--7db6d8ff4d--v4z9k-eth0" Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.154 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:46.158818 containerd[1612]: 2025-04-30 00:53:46.156 [INFO][5441] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b" Apr 30 00:53:46.160029 containerd[1612]: time="2025-04-30T00:53:46.158731034Z" level=info msg="TearDown network for sandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\" successfully" Apr 30 00:53:46.163848 containerd[1612]: time="2025-04-30T00:53:46.163657442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:53:46.163848 containerd[1612]: time="2025-04-30T00:53:46.163732043Z" level=info msg="RemovePodSandbox \"c6d65399679025892d802dfabcbcccdbfcde8e0b1102d807039049bd2404bd2b\" returns successfully" Apr 30 00:53:46.165070 containerd[1612]: time="2025-04-30T00:53:46.164693100Z" level=info msg="StopPodSandbox for \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\"" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.219 [WARNING][5466] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"013c8471-b85b-43e8-91b6-3f1bd76d6d79", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838", Pod:"calico-apiserver-5bb6fc6d5c-rwb45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef88a62320", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.220 [INFO][5466] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.220 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" iface="eth0" netns="" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.220 [INFO][5466] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.220 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.256 [INFO][5474] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.256 [INFO][5474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.256 [INFO][5474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.274 [WARNING][5474] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.275 [INFO][5474] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.277 [INFO][5474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:46.282446 containerd[1612]: 2025-04-30 00:53:46.279 [INFO][5466] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.283334 containerd[1612]: time="2025-04-30T00:53:46.283238125Z" level=info msg="TearDown network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\" successfully" Apr 30 00:53:46.283334 containerd[1612]: time="2025-04-30T00:53:46.283273446Z" level=info msg="StopPodSandbox for \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\" returns successfully" Apr 30 00:53:46.285814 containerd[1612]: time="2025-04-30T00:53:46.285737130Z" level=info msg="RemovePodSandbox for \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\"" Apr 30 00:53:46.285814 containerd[1612]: time="2025-04-30T00:53:46.285795611Z" level=info msg="Forcibly stopping sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\"" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.338 [WARNING][5492] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0", GenerateName:"calico-apiserver-5bb6fc6d5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"013c8471-b85b-43e8-91b6-3f1bd76d6d79", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb6fc6d5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838", Pod:"calico-apiserver-5bb6fc6d5c-rwb45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ef88a62320", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.338 [INFO][5492] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.339 [INFO][5492] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" iface="eth0" netns="" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.339 [INFO][5492] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.339 [INFO][5492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.364 [INFO][5499] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.364 [INFO][5499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.364 [INFO][5499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.378 [WARNING][5499] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.378 [INFO][5499] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" HandleID="k8s-pod-network.233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--apiserver--5bb6fc6d5c--rwb45-eth0" Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.380 [INFO][5499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:46.386341 containerd[1612]: 2025-04-30 00:53:46.383 [INFO][5492] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c" Apr 30 00:53:46.386341 containerd[1612]: time="2025-04-30T00:53:46.385055573Z" level=info msg="TearDown network for sandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\" successfully" Apr 30 00:53:46.389994 containerd[1612]: time="2025-04-30T00:53:46.389875179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:53:46.390217 containerd[1612]: time="2025-04-30T00:53:46.390196465Z" level=info msg="RemovePodSandbox \"233daa7fcd6c0f10b7f688bef610be956278af49222965541eaadf8eb557264c\" returns successfully" Apr 30 00:53:46.392052 containerd[1612]: time="2025-04-30T00:53:46.391968176Z" level=info msg="StopPodSandbox for \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\"" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.440 [WARNING][5517] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0", GenerateName:"calico-kube-controllers-6f9694b44d-", Namespace:"calico-system", SelfLink:"", UID:"ef26787d-7604-4d5c-b737-eddfb5e0d093", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9694b44d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b", Pod:"calico-kube-controllers-6f9694b44d-f2nb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b965c4fede", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.440 [INFO][5517] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.440 [INFO][5517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" iface="eth0" netns="" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.440 [INFO][5517] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.440 [INFO][5517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.464 [INFO][5524] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.464 [INFO][5524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.464 [INFO][5524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.477 [WARNING][5524] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.477 [INFO][5524] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.481 [INFO][5524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:46.485885 containerd[1612]: 2025-04-30 00:53:46.483 [INFO][5517] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.487641 containerd[1612]: time="2025-04-30T00:53:46.485903604Z" level=info msg="TearDown network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\" successfully" Apr 30 00:53:46.487641 containerd[1612]: time="2025-04-30T00:53:46.485947285Z" level=info msg="StopPodSandbox for \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\" returns successfully" Apr 30 00:53:46.487641 containerd[1612]: time="2025-04-30T00:53:46.486546056Z" level=info msg="RemovePodSandbox for \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\"" Apr 30 00:53:46.487641 containerd[1612]: time="2025-04-30T00:53:46.486581136Z" level=info msg="Forcibly stopping sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\"" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.533 [WARNING][5542] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0", GenerateName:"calico-kube-controllers-6f9694b44d-", Namespace:"calico-system", SelfLink:"", UID:"ef26787d-7604-4d5c-b737-eddfb5e0d093", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9694b44d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-6-32a99953eb", ContainerID:"13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b", Pod:"calico-kube-controllers-6f9694b44d-f2nb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b965c4fede", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.533 [INFO][5542] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.533 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" iface="eth0" netns="" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.533 [INFO][5542] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.533 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.556 [INFO][5549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.557 [INFO][5549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.557 [INFO][5549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.569 [WARNING][5549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.569 [INFO][5549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" HandleID="k8s-pod-network.9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Workload="ci--4081--3--3--6--32a99953eb-k8s-calico--kube--controllers--6f9694b44d--f2nb6-eth0" Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.572 [INFO][5549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:53:46.578181 containerd[1612]: 2025-04-30 00:53:46.575 [INFO][5542] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395" Apr 30 00:53:46.578181 containerd[1612]: time="2025-04-30T00:53:46.577904158Z" level=info msg="TearDown network for sandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\" successfully" Apr 30 00:53:46.583532 containerd[1612]: time="2025-04-30T00:53:46.583273573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:53:46.583532 containerd[1612]: time="2025-04-30T00:53:46.583390655Z" level=info msg="RemovePodSandbox \"9e02dd1aad0ae1af2794818c00e9829054adde96691fcf9bae16cc70ba095395\" returns successfully" Apr 30 00:54:47.489805 systemd[1]: run-containerd-runc-k8s.io-58e02cbff908733575c63647c75694dccb29d64e76d4297a6222bdfdc5c7411c-runc.RvUIHV.mount: Deactivated successfully. 
Apr 30 00:54:53.730037 containerd[1612]: time="2025-04-30T00:54:53.729345638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:53.731023 containerd[1612]: time="2025-04-30T00:54:53.730975992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:54:53.733742 containerd[1612]: time="2025-04-30T00:54:53.733690941Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:53.741484 containerd[1612]: time="2025-04-30T00:54:53.740822032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:53.741989 containerd[1612]: time="2025-04-30T00:54:53.741930668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1m23.948940611s" Apr 30 00:54:53.742176 containerd[1612]: time="2025-04-30T00:54:53.742082347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:54:53.744500 containerd[1612]: time="2025-04-30T00:54:53.744451817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:54:53.748232 containerd[1612]: time="2025-04-30T00:54:53.748054403Z" level=info msg="CreateContainer within sandbox \"87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:54:53.779220 containerd[1612]: time="2025-04-30T00:54:53.779053798Z" level=info msg="CreateContainer within sandbox \"87eb7b3926dba91294ad62e14425543cf317735977e87492ac4dbe104326e838\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7aec369d32f48e8d3237adb90fee8b6647c0a4d5a5bb578788c9197f4f3a5323\"" Apr 30 00:54:53.781982 containerd[1612]: time="2025-04-30T00:54:53.780389233Z" level=info msg="StartContainer for \"7aec369d32f48e8d3237adb90fee8b6647c0a4d5a5bb578788c9197f4f3a5323\"" Apr 30 00:54:53.870616 containerd[1612]: time="2025-04-30T00:54:53.870521111Z" level=info msg="StartContainer for \"7aec369d32f48e8d3237adb90fee8b6647c0a4d5a5bb578788c9197f4f3a5323\" returns successfully" Apr 30 00:54:55.207428 containerd[1612]: time="2025-04-30T00:54:55.205924246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:55.210200 containerd[1612]: time="2025-04-30T00:54:55.210152431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:54:55.212171 containerd[1612]: time="2025-04-30T00:54:55.211514266Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:55.217854 containerd[1612]: time="2025-04-30T00:54:55.216871966Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:55.218313 containerd[1612]: time="2025-04-30T00:54:55.218095362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.473586745s" Apr 30 00:54:55.218313 containerd[1612]: time="2025-04-30T00:54:55.218146922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:54:55.225501 containerd[1612]: time="2025-04-30T00:54:55.224691578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:54:55.230721 containerd[1612]: time="2025-04-30T00:54:55.228559845Z" level=info msg="CreateContainer within sandbox \"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:54:55.260157 containerd[1612]: time="2025-04-30T00:54:55.259885252Z" level=info msg="CreateContainer within sandbox \"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e88982e48e6e149065259724dbfc3a285a05d4d3c151a26849ca47a4a5f73549\"" Apr 30 00:54:55.262148 containerd[1612]: time="2025-04-30T00:54:55.261837725Z" level=info msg="StartContainer for \"e88982e48e6e149065259724dbfc3a285a05d4d3c151a26849ca47a4a5f73549\"" Apr 30 00:54:55.424664 containerd[1612]: time="2025-04-30T00:54:55.423481344Z" level=info msg="StartContainer for \"e88982e48e6e149065259724dbfc3a285a05d4d3c151a26849ca47a4a5f73549\" returns successfully" Apr 30 00:54:55.632901 containerd[1612]: time="2025-04-30T00:54:55.632111795Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:55.635985 containerd[1612]: time="2025-04-30T00:54:55.634835945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 00:54:55.646990 containerd[1612]: time="2025-04-30T00:54:55.645449187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 420.710209ms" Apr 30 00:54:55.647232 containerd[1612]: time="2025-04-30T00:54:55.647196141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:54:55.652295 containerd[1612]: time="2025-04-30T00:54:55.652210363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:54:55.658964 containerd[1612]: time="2025-04-30T00:54:55.656991305Z" level=info msg="CreateContainer within sandbox \"b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:54:55.681623 containerd[1612]: time="2025-04-30T00:54:55.681569937Z" level=info msg="CreateContainer within sandbox \"b7541a8e0071d9c1b6d906ba0aaa82ee4f58b0fc2ee73a49bb5f99ae6b9e4f6b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"36b1f716810156e1230baba8e74cc9870ce9442bf6d5ca2689bbe5d33d74ea70\"" Apr 30 00:54:55.683392 containerd[1612]: time="2025-04-30T00:54:55.683357051Z" level=info msg="StartContainer for \"36b1f716810156e1230baba8e74cc9870ce9442bf6d5ca2689bbe5d33d74ea70\"" Apr 30 00:54:55.835503 containerd[1612]: time="2025-04-30T00:54:55.833461471Z" level=info msg="StartContainer for \"36b1f716810156e1230baba8e74cc9870ce9442bf6d5ca2689bbe5d33d74ea70\" returns successfully" Apr 30 00:54:55.904110 kubelet[2954]: I0430 00:54:55.902345 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-rwb45" podStartSLOduration=24.951583809 podStartE2EDuration="1m48.902325864s" podCreationTimestamp="2025-04-30 00:53:07 +0000 UTC" firstStartedPulling="2025-04-30 00:53:29.792505047 +0000 UTC m=+44.637131378" lastFinishedPulling="2025-04-30 00:54:53.743247142 +0000 UTC m=+128.587873433" observedRunningTime="2025-04-30 00:54:53.934091855 +0000 UTC m=+128.778718186" watchObservedRunningTime="2025-04-30 00:54:55.902325864 +0000 UTC m=+130.746952195" Apr 30 00:54:57.708627 containerd[1612]: time="2025-04-30T00:54:57.708567636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:57.711089 containerd[1612]: time="2025-04-30T00:54:57.711033909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:54:57.713233 containerd[1612]: time="2025-04-30T00:54:57.712929862Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:57.718529 containerd[1612]: time="2025-04-30T00:54:57.718423565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:57.721394 containerd[1612]: time="2025-04-30T00:54:57.721313076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.069061274s" Apr 30 00:54:57.721394 containerd[1612]: time="2025-04-30T00:54:57.721366596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:54:57.725043 containerd[1612]: time="2025-04-30T00:54:57.723440149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:54:57.763807 containerd[1612]: time="2025-04-30T00:54:57.763457502Z" level=info msg="CreateContainer within sandbox \"13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:54:57.786323 containerd[1612]: time="2025-04-30T00:54:57.784150956Z" level=info msg="CreateContainer within sandbox \"13cb6b319f1335b0c9d968a1270ee75caa924f684d0e74efaef76f622be7861b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff\"" Apr 30 00:54:57.787980 containerd[1612]: time="2025-04-30T00:54:57.787686945Z" level=info msg="StartContainer for \"79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff\"" Apr 30 00:54:57.869704 containerd[1612]: time="2025-04-30T00:54:57.869480765Z" level=info msg="StartContainer for \"79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff\" returns successfully" Apr 30 00:54:57.979923 kubelet[2954]: I0430 00:54:57.977978 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bb6fc6d5c-65nrh" podStartSLOduration=29.240846981 podStartE2EDuration="1m50.9779243s" podCreationTimestamp="2025-04-30 00:53:07 +0000 UTC" firstStartedPulling="2025-04-30 00:53:33.914476686 +0000 UTC m=+48.759102977" lastFinishedPulling="2025-04-30 00:54:55.651553965 +0000 UTC m=+130.496180296" observedRunningTime="2025-04-30 00:54:55.965138078 +0000 UTC m=+130.809764409" watchObservedRunningTime="2025-04-30 00:54:57.9779243 +0000 UTC m=+132.822550671" Apr 30 00:54:57.985062 kubelet[2954]: I0430 00:54:57.984776 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f9694b44d-f2nb6" podStartSLOduration=26.541602904 podStartE2EDuration="1m49.984739838s" podCreationTimestamp="2025-04-30 00:53:08 +0000 UTC" firstStartedPulling="2025-04-30 00:53:34.279609217 +0000 UTC m=+49.124235548" lastFinishedPulling="2025-04-30 00:54:57.722746151 +0000 UTC m=+132.567372482" observedRunningTime="2025-04-30 00:54:57.976442185 +0000 UTC m=+132.821068516" watchObservedRunningTime="2025-04-30 00:54:57.984739838 +0000 UTC m=+132.829366169" Apr 30 00:54:59.154157 containerd[1612]: time="2025-04-30T00:54:59.154085545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:59.154918 containerd[1612]: time="2025-04-30T00:54:59.154781783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:54:59.156150 containerd[1612]: time="2025-04-30T00:54:59.156115339Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:59.158964 containerd[1612]: time="2025-04-30T00:54:59.158808572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:54:59.159860 containerd[1612]: time="2025-04-30T00:54:59.159686689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 
1.434405026s" Apr 30 00:54:59.159860 containerd[1612]: time="2025-04-30T00:54:59.159750409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:54:59.164057 containerd[1612]: time="2025-04-30T00:54:59.163918117Z" level=info msg="CreateContainer within sandbox \"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:54:59.193680 containerd[1612]: time="2025-04-30T00:54:59.193203436Z" level=info msg="CreateContainer within sandbox \"332cb3db79d1d0ba1602a7aabcb90e257629e1887346b9579cdf859283698b47\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d5f82a37b9c3883120ad8cf0bcfd0c19adb69bcf9f6827e29d386940150dc09c\"" Apr 30 00:54:59.195745 containerd[1612]: time="2025-04-30T00:54:59.195681069Z" level=info msg="StartContainer for \"d5f82a37b9c3883120ad8cf0bcfd0c19adb69bcf9f6827e29d386940150dc09c\"" Apr 30 00:54:59.270672 containerd[1612]: time="2025-04-30T00:54:59.270563141Z" level=info msg="StartContainer for \"d5f82a37b9c3883120ad8cf0bcfd0c19adb69bcf9f6827e29d386940150dc09c\" returns successfully" Apr 30 00:54:59.484416 kubelet[2954]: I0430 00:54:59.484308 2954 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:54:59.484416 kubelet[2954]: I0430 00:54:59.484345 2954 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:54:59.984234 kubelet[2954]: I0430 00:54:59.983984 2954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4568b" podStartSLOduration=24.627106857 podStartE2EDuration="1m51.98392944s" podCreationTimestamp="2025-04-30 00:53:08 +0000 UTC" firstStartedPulling="2025-04-30 00:53:31.804027903 +0000 UTC m=+46.648654194" lastFinishedPulling="2025-04-30 00:54:59.160850446 +0000 UTC m=+134.005476777" observedRunningTime="2025-04-30 00:54:59.983356921 +0000 UTC m=+134.827983292" watchObservedRunningTime="2025-04-30 00:54:59.98392944 +0000 UTC m=+134.828555771" Apr 30 00:55:17.382044 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.SeDj33.mount: Deactivated successfully. Apr 30 00:56:17.368916 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.pdZCc7.mount: Deactivated successfully. Apr 30 00:57:26.742322 systemd[1]: Started sshd@7-49.12.45.4:22-139.178.68.195:55216.service - OpenSSH per-connection server daemon (139.178.68.195:55216). Apr 30 00:57:27.717618 sshd[6199]: Accepted publickey for core from 139.178.68.195 port 55216 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:27.721291 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:27.727972 systemd-logind[1580]: New session 8 of user core. Apr 30 00:57:27.731417 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:57:28.511808 sshd[6199]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:28.518729 systemd[1]: sshd@7-49.12.45.4:22-139.178.68.195:55216.service: Deactivated successfully. 
Apr 30 00:57:28.526744 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:57:28.528003 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:57:28.529757 systemd-logind[1580]: Removed session 8. Apr 30 00:57:33.689457 systemd[1]: Started sshd@8-49.12.45.4:22-139.178.68.195:55228.service - OpenSSH per-connection server daemon (139.178.68.195:55228). Apr 30 00:57:34.686564 sshd[6218]: Accepted publickey for core from 139.178.68.195 port 55228 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:34.688673 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:34.694309 systemd-logind[1580]: New session 9 of user core. Apr 30 00:57:34.701491 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:57:35.454252 sshd[6218]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:35.459918 systemd[1]: sshd@8-49.12.45.4:22-139.178.68.195:55228.service: Deactivated successfully. Apr 30 00:57:35.464411 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:57:35.465250 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:57:35.469706 systemd-logind[1580]: Removed session 9. Apr 30 00:57:40.619384 systemd[1]: Started sshd@9-49.12.45.4:22-139.178.68.195:44564.service - OpenSSH per-connection server daemon (139.178.68.195:44564). Apr 30 00:57:41.614052 sshd[6234]: Accepted publickey for core from 139.178.68.195 port 44564 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:41.616400 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:41.622319 systemd-logind[1580]: New session 10 of user core. Apr 30 00:57:41.629280 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:57:42.374900 sshd[6234]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:42.383813 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:57:42.386548 systemd[1]: sshd@9-49.12.45.4:22-139.178.68.195:44564.service: Deactivated successfully. Apr 30 00:57:42.392562 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:57:42.402037 systemd-logind[1580]: Removed session 10. Apr 30 00:57:47.368215 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.JB44P9.mount: Deactivated successfully. Apr 30 00:57:47.540454 systemd[1]: Started sshd@10-49.12.45.4:22-139.178.68.195:50902.service - OpenSSH per-connection server daemon (139.178.68.195:50902). Apr 30 00:57:48.530560 sshd[6290]: Accepted publickey for core from 139.178.68.195 port 50902 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:48.532686 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:48.538052 systemd-logind[1580]: New session 11 of user core. Apr 30 00:57:48.544354 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:57:49.301654 sshd[6290]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:49.307313 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:57:49.309543 systemd[1]: sshd@10-49.12.45.4:22-139.178.68.195:50902.service: Deactivated successfully. Apr 30 00:57:49.315253 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:57:49.318155 systemd-logind[1580]: Removed session 11. 
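The csi_plugin.go entries a little above record one half of a handshake: kubelet found a registration socket, asked the driver for its info, validated the advertised version 1.0.0, and registered csi.tigera.io at the listed endpoint. The driver's half of that handshake is a tiny gRPC server; a hedged sketch against the kubelet pluginregistration v1 API (the registration socket name follows the usual node-driver-registrar convention but is an assumption here):

package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// server answers kubelet's registration probe for a CSI driver, mirroring
// the "Register new plugin with name: csi.tigera.io" entries above.
type server struct{}

func (server) GetInfo(ctx context.Context, r *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // the version kubelet validated above
	}, nil
}

func (server) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !st.PluginRegistered {
		log.Printf("kubelet rejected registration: %s", st.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// kubelet watches this directory and dials any socket that appears in it.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	g := grpc.NewServer()
	registerapi.RegisterRegistrationServer(g, server{})
	log.Fatal(g.Serve(l))
}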
Apr 30 00:57:54.465454 systemd[1]: Started sshd@11-49.12.45.4:22-139.178.68.195:50904.service - OpenSSH per-connection server daemon (139.178.68.195:50904). Apr 30 00:57:55.461477 sshd[6306]: Accepted publickey for core from 139.178.68.195 port 50904 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:57:55.464814 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:55.475699 systemd-logind[1580]: New session 12 of user core. Apr 30 00:57:55.480444 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:57:56.214486 sshd[6306]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:56.219910 systemd[1]: sshd@11-49.12.45.4:22-139.178.68.195:50904.service: Deactivated successfully. Apr 30 00:57:56.225786 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:57:56.226521 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:57:56.228273 systemd-logind[1580]: Removed session 12. Apr 30 00:58:01.386892 systemd[1]: Started sshd@12-49.12.45.4:22-139.178.68.195:53280.service - OpenSSH per-connection server daemon (139.178.68.195:53280). Apr 30 00:58:02.370879 sshd[6321]: Accepted publickey for core from 139.178.68.195 port 53280 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:02.375830 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:02.381281 systemd-logind[1580]: New session 13 of user core. Apr 30 00:58:02.391786 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:58:03.139790 sshd[6321]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:03.145962 systemd[1]: sshd@12-49.12.45.4:22-139.178.68.195:53280.service: Deactivated successfully. Apr 30 00:58:03.151197 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:58:03.151817 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:58:03.154400 systemd-logind[1580]: Removed session 13. Apr 30 00:58:08.310639 systemd[1]: Started sshd@13-49.12.45.4:22-139.178.68.195:60424.service - OpenSSH per-connection server daemon (139.178.68.195:60424). Apr 30 00:58:09.294769 sshd[6359]: Accepted publickey for core from 139.178.68.195 port 60424 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:09.297086 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:09.310545 systemd-logind[1580]: New session 14 of user core. Apr 30 00:58:09.318977 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:58:10.057184 sshd[6359]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:10.062837 systemd[1]: sshd@13-49.12.45.4:22-139.178.68.195:60424.service: Deactivated successfully. Apr 30 00:58:10.066978 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:58:10.067280 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:58:10.069452 systemd-logind[1580]: Removed session 14. Apr 30 00:58:15.222472 systemd[1]: Started sshd@14-49.12.45.4:22-139.178.68.195:54620.service - OpenSSH per-connection server daemon (139.178.68.195:54620). 
Apr 30 00:58:16.202368 sshd[6387]: Accepted publickey for core from 139.178.68.195 port 54620 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:16.205626 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:16.211736 systemd-logind[1580]: New session 15 of user core. Apr 30 00:58:16.218196 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:58:16.972074 sshd[6387]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:16.980873 systemd[1]: sshd@14-49.12.45.4:22-139.178.68.195:54620.service: Deactivated successfully. Apr 30 00:58:16.984771 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:58:16.985429 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:58:16.987544 systemd-logind[1580]: Removed session 15. Apr 30 00:58:22.146631 systemd[1]: Started sshd@15-49.12.45.4:22-139.178.68.195:54628.service - OpenSSH per-connection server daemon (139.178.68.195:54628). Apr 30 00:58:23.119802 sshd[6449]: Accepted publickey for core from 139.178.68.195 port 54628 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:23.121594 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:23.129138 systemd-logind[1580]: New session 16 of user core. Apr 30 00:58:23.132295 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:58:23.879024 sshd[6449]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:23.885703 systemd[1]: sshd@15-49.12.45.4:22-139.178.68.195:54628.service: Deactivated successfully. Apr 30 00:58:23.890639 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:58:23.892894 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:58:23.895876 systemd-logind[1580]: Removed session 16. Apr 30 00:58:29.049492 systemd[1]: Started sshd@16-49.12.45.4:22-139.178.68.195:48532.service - OpenSSH per-connection server daemon (139.178.68.195:48532). Apr 30 00:58:30.044098 sshd[6464]: Accepted publickey for core from 139.178.68.195 port 48532 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:30.045405 sshd[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:30.051372 systemd-logind[1580]: New session 17 of user core. Apr 30 00:58:30.058428 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:58:30.828338 sshd[6464]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:30.836164 systemd[1]: sshd@16-49.12.45.4:22-139.178.68.195:48532.service: Deactivated successfully. Apr 30 00:58:30.842029 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:58:30.844541 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:58:30.846028 systemd-logind[1580]: Removed session 17. Apr 30 00:58:35.993315 systemd[1]: Started sshd@17-49.12.45.4:22-139.178.68.195:43104.service - OpenSSH per-connection server daemon (139.178.68.195:43104). Apr 30 00:58:36.991394 sshd[6480]: Accepted publickey for core from 139.178.68.195 port 43104 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:36.994054 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:36.999822 systemd-logind[1580]: New session 18 of user core. Apr 30 00:58:37.008232 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 30 00:58:37.753595 sshd[6480]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:37.764104 systemd[1]: sshd@17-49.12.45.4:22-139.178.68.195:43104.service: Deactivated successfully. Apr 30 00:58:37.770862 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:58:37.774524 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:58:37.776895 systemd-logind[1580]: Removed session 18. Apr 30 00:58:42.926976 systemd[1]: Started sshd@18-49.12.45.4:22-139.178.68.195:43116.service - OpenSSH per-connection server daemon (139.178.68.195:43116). Apr 30 00:58:43.912850 sshd[6496]: Accepted publickey for core from 139.178.68.195 port 43116 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:43.915316 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:43.920634 systemd-logind[1580]: New session 19 of user core. Apr 30 00:58:43.925331 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:58:44.676022 sshd[6496]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:44.682447 systemd[1]: sshd@18-49.12.45.4:22-139.178.68.195:43116.service: Deactivated successfully. Apr 30 00:58:44.686534 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:58:44.687367 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:58:44.688915 systemd-logind[1580]: Removed session 19. Apr 30 00:58:49.844321 systemd[1]: Started sshd@19-49.12.45.4:22-139.178.68.195:55418.service - OpenSSH per-connection server daemon (139.178.68.195:55418). Apr 30 00:58:50.827759 sshd[6553]: Accepted publickey for core from 139.178.68.195 port 55418 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:50.830160 sshd[6553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:50.836692 systemd-logind[1580]: New session 20 of user core. Apr 30 00:58:50.851639 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:58:51.585481 sshd[6553]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:51.590436 systemd[1]: sshd@19-49.12.45.4:22-139.178.68.195:55418.service: Deactivated successfully. Apr 30 00:58:51.594855 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:58:51.596359 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:58:51.597253 systemd-logind[1580]: Removed session 20. Apr 30 00:58:56.753418 systemd[1]: Started sshd@20-49.12.45.4:22-139.178.68.195:34462.service - OpenSSH per-connection server daemon (139.178.68.195:34462). Apr 30 00:58:57.758040 sshd[6575]: Accepted publickey for core from 139.178.68.195 port 34462 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:58:57.760703 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:58:57.767681 systemd-logind[1580]: New session 21 of user core. Apr 30 00:58:57.773565 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:58:58.525804 sshd[6575]: pam_unix(sshd:session): session closed for user core Apr 30 00:58:58.532392 systemd[1]: sshd@20-49.12.45.4:22-139.178.68.195:34462.service: Deactivated successfully. Apr 30 00:58:58.538290 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:58:58.539137 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 00:58:58.540919 systemd-logind[1580]: Removed session 21. 
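From here on the journal settles into one repeating rhythm: systemd starts a per-connection sshd@N unit, pam_unix opens a session for core, systemd-logind assigns a session number, and the teardown mirrors it a few seconds later. Pairing logind's "New session" and "Removed session" lines is enough to measure how long each session lived; a small sketch, assuming one journal entry per line on stdin and the short-month timestamp format seen here (it omits the year, so it is only meaningful within a single run of the log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// systemd-logind pairs look like:
//   Apr 30 00:57:27.727972 systemd-logind[1580]: New session 8 of user core.
//   Apr 30 00:57:28.529757 systemd-logind[1580]: Removed session 8.
var (
	newRE     = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
	removedRE = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

// Journal short-month stamp; note there is no year field.
const stamp = "Jan 2 15:04:05.000000"

func main() {
	opened := map[string]time.Time{} // session number -> open time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := newRE.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRE.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lasted %v\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}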
Apr 30 00:59:03.697251 systemd[1]: Started sshd@21-49.12.45.4:22-139.178.68.195:34468.service - OpenSSH per-connection server daemon (139.178.68.195:34468). Apr 30 00:59:04.686663 sshd[6592]: Accepted publickey for core from 139.178.68.195 port 34468 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:04.690008 sshd[6592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:04.697418 systemd-logind[1580]: New session 22 of user core. Apr 30 00:59:04.703156 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 00:59:05.447095 sshd[6592]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:05.454848 systemd[1]: sshd@21-49.12.45.4:22-139.178.68.195:34468.service: Deactivated successfully. Apr 30 00:59:05.460033 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:59:05.460924 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit. Apr 30 00:59:05.467565 systemd-logind[1580]: Removed session 22. Apr 30 00:59:10.611280 systemd[1]: Started sshd@22-49.12.45.4:22-139.178.68.195:60478.service - OpenSSH per-connection server daemon (139.178.68.195:60478). Apr 30 00:59:11.586258 sshd[6626]: Accepted publickey for core from 139.178.68.195 port 60478 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:11.588397 sshd[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:11.595917 systemd-logind[1580]: New session 23 of user core. Apr 30 00:59:11.602116 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 00:59:12.334048 sshd[6626]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:12.338512 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit. Apr 30 00:59:12.339646 systemd[1]: sshd@22-49.12.45.4:22-139.178.68.195:60478.service: Deactivated successfully. Apr 30 00:59:12.344417 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 00:59:12.346605 systemd-logind[1580]: Removed session 23. Apr 30 00:59:17.501610 systemd[1]: Started sshd@23-49.12.45.4:22-139.178.68.195:38182.service - OpenSSH per-connection server daemon (139.178.68.195:38182). Apr 30 00:59:18.499690 sshd[6675]: Accepted publickey for core from 139.178.68.195 port 38182 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:18.503132 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:18.509043 systemd-logind[1580]: New session 24 of user core. Apr 30 00:59:18.512393 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 00:59:19.280442 sshd[6675]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:19.289085 systemd[1]: sshd@23-49.12.45.4:22-139.178.68.195:38182.service: Deactivated successfully. Apr 30 00:59:19.292929 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 00:59:19.294499 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit. Apr 30 00:59:19.296844 systemd-logind[1580]: Removed session 24. Apr 30 00:59:24.442268 systemd[1]: Started sshd@24-49.12.45.4:22-139.178.68.195:38186.service - OpenSSH per-connection server daemon (139.178.68.195:38186). 
Apr 30 00:59:25.432372 sshd[6696]: Accepted publickey for core from 139.178.68.195 port 38186 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:25.434400 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:25.439496 systemd-logind[1580]: New session 25 of user core. Apr 30 00:59:25.447650 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 00:59:26.205623 sshd[6696]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:26.209988 systemd-logind[1580]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:59:26.211043 systemd[1]: sshd@24-49.12.45.4:22-139.178.68.195:38186.service: Deactivated successfully. Apr 30 00:59:26.216601 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:59:26.219582 systemd-logind[1580]: Removed session 25. Apr 30 00:59:31.375337 systemd[1]: Started sshd@25-49.12.45.4:22-139.178.68.195:43158.service - OpenSSH per-connection server daemon (139.178.68.195:43158). Apr 30 00:59:32.372043 sshd[6710]: Accepted publickey for core from 139.178.68.195 port 43158 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:32.376608 sshd[6710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:32.389478 systemd-logind[1580]: New session 26 of user core. Apr 30 00:59:32.399014 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 00:59:33.130243 sshd[6710]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:33.136658 systemd[1]: sshd@25-49.12.45.4:22-139.178.68.195:43158.service: Deactivated successfully. Apr 30 00:59:33.142066 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 00:59:33.142901 systemd-logind[1580]: Session 26 logged out. Waiting for processes to exit. Apr 30 00:59:33.144161 systemd-logind[1580]: Removed session 26. Apr 30 00:59:38.302995 systemd[1]: Started sshd@26-49.12.45.4:22-139.178.68.195:34936.service - OpenSSH per-connection server daemon (139.178.68.195:34936). Apr 30 00:59:39.303881 sshd[6727]: Accepted publickey for core from 139.178.68.195 port 34936 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:39.306709 sshd[6727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:39.317967 systemd-logind[1580]: New session 27 of user core. Apr 30 00:59:39.322559 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 00:59:40.076862 sshd[6727]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:40.083011 systemd[1]: sshd@26-49.12.45.4:22-139.178.68.195:34936.service: Deactivated successfully. Apr 30 00:59:40.086473 systemd-logind[1580]: Session 27 logged out. Waiting for processes to exit. Apr 30 00:59:40.086632 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 00:59:40.089006 systemd-logind[1580]: Removed session 27. Apr 30 00:59:45.239283 systemd[1]: Started sshd@27-49.12.45.4:22-139.178.68.195:35466.service - OpenSSH per-connection server daemon (139.178.68.195:35466). Apr 30 00:59:46.223952 sshd[6743]: Accepted publickey for core from 139.178.68.195 port 35466 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:46.226856 sshd[6743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:46.236558 systemd-logind[1580]: New session 28 of user core. Apr 30 00:59:46.242424 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 30 00:59:46.977631 sshd[6743]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:46.984092 systemd[1]: sshd@27-49.12.45.4:22-139.178.68.195:35466.service: Deactivated successfully. Apr 30 00:59:46.988242 systemd-logind[1580]: Session 28 logged out. Waiting for processes to exit. Apr 30 00:59:46.989158 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 00:59:46.991815 systemd-logind[1580]: Removed session 28. Apr 30 00:59:52.146573 systemd[1]: Started sshd@28-49.12.45.4:22-139.178.68.195:35476.service - OpenSSH per-connection server daemon (139.178.68.195:35476). Apr 30 00:59:53.142418 sshd[6817]: Accepted publickey for core from 139.178.68.195 port 35476 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:59:53.145369 sshd[6817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:53.153502 systemd-logind[1580]: New session 29 of user core. Apr 30 00:59:53.158501 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 00:59:53.905621 sshd[6817]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:53.912225 systemd-logind[1580]: Session 29 logged out. Waiting for processes to exit. Apr 30 00:59:53.914502 systemd[1]: sshd@28-49.12.45.4:22-139.178.68.195:35476.service: Deactivated successfully. Apr 30 00:59:53.918418 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 00:59:53.920871 systemd-logind[1580]: Removed session 29. Apr 30 00:59:59.068371 systemd[1]: Started sshd@29-49.12.45.4:22-139.178.68.195:48950.service - OpenSSH per-connection server daemon (139.178.68.195:48950). Apr 30 01:00:00.044980 sshd[6833]: Accepted publickey for core from 139.178.68.195 port 48950 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:00.046341 sshd[6833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:00.052701 systemd-logind[1580]: New session 30 of user core. Apr 30 01:00:00.058290 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 30 01:00:00.797703 sshd[6833]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:00.803555 systemd-logind[1580]: Session 30 logged out. Waiting for processes to exit. Apr 30 01:00:00.804453 systemd[1]: sshd@29-49.12.45.4:22-139.178.68.195:48950.service: Deactivated successfully. Apr 30 01:00:00.811362 systemd[1]: session-30.scope: Deactivated successfully. Apr 30 01:00:00.814281 systemd-logind[1580]: Removed session 30. Apr 30 01:00:05.969320 systemd[1]: Started sshd@30-49.12.45.4:22-139.178.68.195:52114.service - OpenSSH per-connection server daemon (139.178.68.195:52114). Apr 30 01:00:06.970998 sshd[6849]: Accepted publickey for core from 139.178.68.195 port 52114 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:06.974214 sshd[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:06.985759 systemd-logind[1580]: New session 31 of user core. Apr 30 01:00:06.990614 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 30 01:00:07.542505 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.E9K8eh.mount: Deactivated successfully. Apr 30 01:00:07.766519 sshd[6849]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:07.771666 systemd[1]: sshd@30-49.12.45.4:22-139.178.68.195:52114.service: Deactivated successfully. Apr 30 01:00:07.778117 systemd[1]: session-31.scope: Deactivated successfully. 
Apr 30 01:00:07.783294 systemd-logind[1580]: Session 31 logged out. Waiting for processes to exit. Apr 30 01:00:07.784807 systemd-logind[1580]: Removed session 31. Apr 30 01:00:12.932419 systemd[1]: Started sshd@31-49.12.45.4:22-139.178.68.195:52128.service - OpenSSH per-connection server daemon (139.178.68.195:52128). Apr 30 01:00:13.924630 sshd[6884]: Accepted publickey for core from 139.178.68.195 port 52128 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:13.927304 sshd[6884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:13.935014 systemd-logind[1580]: New session 32 of user core. Apr 30 01:00:13.940655 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 30 01:00:14.687695 sshd[6884]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:14.695128 systemd[1]: sshd@31-49.12.45.4:22-139.178.68.195:52128.service: Deactivated successfully. Apr 30 01:00:14.702485 systemd[1]: session-32.scope: Deactivated successfully. Apr 30 01:00:14.703656 systemd-logind[1580]: Session 32 logged out. Waiting for processes to exit. Apr 30 01:00:14.704872 systemd-logind[1580]: Removed session 32. Apr 30 01:00:17.377626 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.2fW0F2.mount: Deactivated successfully. Apr 30 01:00:19.854297 systemd[1]: Started sshd@32-49.12.45.4:22-139.178.68.195:44836.service - OpenSSH per-connection server daemon (139.178.68.195:44836). Apr 30 01:00:20.857221 sshd[6941]: Accepted publickey for core from 139.178.68.195 port 44836 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:20.860404 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:20.868989 systemd-logind[1580]: New session 33 of user core. Apr 30 01:00:20.874384 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 30 01:00:21.620042 sshd[6941]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:21.626567 systemd[1]: sshd@32-49.12.45.4:22-139.178.68.195:44836.service: Deactivated successfully. Apr 30 01:00:21.631394 systemd-logind[1580]: Session 33 logged out. Waiting for processes to exit. Apr 30 01:00:21.631944 systemd[1]: session-33.scope: Deactivated successfully. Apr 30 01:00:21.633735 systemd-logind[1580]: Removed session 33. Apr 30 01:00:21.789062 systemd[1]: Started sshd@33-49.12.45.4:22-139.178.68.195:44850.service - OpenSSH per-connection server daemon (139.178.68.195:44850). Apr 30 01:00:22.773123 sshd[6956]: Accepted publickey for core from 139.178.68.195 port 44850 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:22.775613 sshd[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:22.782041 systemd-logind[1580]: New session 34 of user core. Apr 30 01:00:22.787351 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 30 01:00:23.584442 sshd[6956]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:23.590103 systemd-logind[1580]: Session 34 logged out. Waiting for processes to exit. Apr 30 01:00:23.590566 systemd[1]: sshd@33-49.12.45.4:22-139.178.68.195:44850.service: Deactivated successfully. Apr 30 01:00:23.595632 systemd[1]: session-34.scope: Deactivated successfully. Apr 30 01:00:23.598706 systemd-logind[1580]: Removed session 34. 
Apr 30 01:00:23.755669 systemd[1]: Started sshd@34-49.12.45.4:22-139.178.68.195:44864.service - OpenSSH per-connection server daemon (139.178.68.195:44864). Apr 30 01:00:24.740664 sshd[6968]: Accepted publickey for core from 139.178.68.195 port 44864 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:24.744205 sshd[6968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:24.750075 systemd-logind[1580]: New session 35 of user core. Apr 30 01:00:24.755508 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 30 01:00:25.356539 update_engine[1582]: I20250430 01:00:25.355090 1582 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 01:00:25.356539 update_engine[1582]: I20250430 01:00:25.355161 1582 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 01:00:25.356539 update_engine[1582]: I20250430 01:00:25.355494 1582 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 01:00:25.356539 update_engine[1582]: I20250430 01:00:25.356122 1582 omaha_request_params.cc:62] Current group set to lts Apr 30 01:00:25.356539 update_engine[1582]: I20250430 01:00:25.356261 1582 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 01:00:25.356539 update_engine[1582]: I20250430 01:00:25.356276 1582 update_attempter.cc:643] Scheduling an action processor start. Apr 30 01:00:25.356539 update_engine[1582]: I20250430 01:00:25.356300 1582 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 01:00:25.357978 locksmithd[1630]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 01:00:25.360148 update_engine[1582]: I20250430 01:00:25.359810 1582 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 01:00:25.360148 update_engine[1582]: I20250430 01:00:25.360113 1582 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 01:00:25.361838 update_engine[1582]: I20250430 01:00:25.360301 1582 omaha_request_action.cc:272] Request: Apr 30 01:00:25.361838 update_engine[1582]: I20250430 01:00:25.360332 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:00:25.366296 update_engine[1582]: I20250430 01:00:25.366223 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:00:25.366700 update_engine[1582]: I20250430 01:00:25.366633 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 01:00:25.367891 update_engine[1582]: E20250430 01:00:25.367841 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:00:25.368014 update_engine[1582]: I20250430 01:00:25.367947 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 01:00:25.508552 sshd[6968]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:25.512420 systemd-logind[1580]: Session 35 logged out. Waiting for processes to exit. Apr 30 01:00:25.515377 systemd[1]: sshd@34-49.12.45.4:22-139.178.68.195:44864.service: Deactivated successfully.
Apr 30 01:00:25.520040 systemd[1]: session-35.scope: Deactivated successfully. Apr 30 01:00:25.523674 systemd-logind[1580]: Removed session 35. Apr 30 01:00:30.676631 systemd[1]: Started sshd@35-49.12.45.4:22-139.178.68.195:48476.service - OpenSSH per-connection server daemon (139.178.68.195:48476). Apr 30 01:00:31.679721 sshd[6982]: Accepted publickey for core from 139.178.68.195 port 48476 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:31.682612 sshd[6982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:31.689261 systemd-logind[1580]: New session 36 of user core. Apr 30 01:00:31.694417 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 30 01:00:32.440429 sshd[6982]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:32.445552 systemd[1]: sshd@35-49.12.45.4:22-139.178.68.195:48476.service: Deactivated successfully. Apr 30 01:00:32.451311 systemd-logind[1580]: Session 36 logged out. Waiting for processes to exit. Apr 30 01:00:32.452346 systemd[1]: session-36.scope: Deactivated successfully. Apr 30 01:00:32.455142 systemd-logind[1580]: Removed session 36. Apr 30 01:00:35.358018 update_engine[1582]: I20250430 01:00:35.357692 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:00:35.358475 update_engine[1582]: I20250430 01:00:35.358060 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:00:35.358475 update_engine[1582]: I20250430 01:00:35.358307 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 01:00:35.359411 update_engine[1582]: E20250430 01:00:35.359339 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:00:35.359411 update_engine[1582]: I20250430 01:00:35.359424 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 01:00:37.613374 systemd[1]: Started sshd@36-49.12.45.4:22-139.178.68.195:36846.service - OpenSSH per-connection server daemon (139.178.68.195:36846). Apr 30 01:00:38.600157 sshd[6998]: Accepted publickey for core from 139.178.68.195 port 36846 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:38.602311 sshd[6998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:38.609354 systemd-logind[1580]: New session 37 of user core. Apr 30 01:00:38.614288 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 30 01:00:39.361468 sshd[6998]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:39.367050 systemd[1]: sshd@36-49.12.45.4:22-139.178.68.195:36846.service: Deactivated successfully. Apr 30 01:00:39.371709 systemd-logind[1580]: Session 37 logged out. Waiting for processes to exit. Apr 30 01:00:39.373411 systemd[1]: session-37.scope: Deactivated successfully. Apr 30 01:00:39.374666 systemd-logind[1580]: Removed session 37. Apr 30 01:00:44.531365 systemd[1]: Started sshd@37-49.12.45.4:22-139.178.68.195:36848.service - OpenSSH per-connection server daemon (139.178.68.195:36848). Apr 30 01:00:45.360093 update_engine[1582]: I20250430 01:00:45.359988 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:00:45.361320 update_engine[1582]: I20250430 01:00:45.360286 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:00:45.361320 update_engine[1582]: I20250430 01:00:45.360529 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 01:00:45.361604 update_engine[1582]: E20250430 01:00:45.361521 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:00:45.361604 update_engine[1582]: I20250430 01:00:45.361599 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 01:00:45.514931 sshd[7013]: Accepted publickey for core from 139.178.68.195 port 36848 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:45.518304 sshd[7013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:45.527131 systemd-logind[1580]: New session 38 of user core. Apr 30 01:00:45.532323 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 30 01:00:46.284787 sshd[7013]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:46.291911 systemd[1]: sshd@37-49.12.45.4:22-139.178.68.195:36848.service: Deactivated successfully. Apr 30 01:00:46.296839 systemd[1]: session-38.scope: Deactivated successfully. Apr 30 01:00:46.297433 systemd-logind[1580]: Session 38 logged out. Waiting for processes to exit. Apr 30 01:00:46.299349 systemd-logind[1580]: Removed session 38. Apr 30 01:00:51.450380 systemd[1]: Started sshd@38-49.12.45.4:22-139.178.68.195:60874.service - OpenSSH per-connection server daemon (139.178.68.195:60874). Apr 30 01:00:52.439411 sshd[7073]: Accepted publickey for core from 139.178.68.195 port 60874 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:52.442163 sshd[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:52.448226 systemd-logind[1580]: New session 39 of user core. Apr 30 01:00:52.458744 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 30 01:00:53.198732 sshd[7073]: pam_unix(sshd:session): session closed for user core Apr 30 01:00:53.204843 systemd[1]: sshd@38-49.12.45.4:22-139.178.68.195:60874.service: Deactivated successfully. Apr 30 01:00:53.209731 systemd[1]: session-39.scope: Deactivated successfully. Apr 30 01:00:53.210237 systemd-logind[1580]: Session 39 logged out. Waiting for processes to exit. Apr 30 01:00:53.214631 systemd-logind[1580]: Removed session 39. Apr 30 01:00:55.356874 update_engine[1582]: I20250430 01:00:55.356518 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:00:55.357349 update_engine[1582]: I20250430 01:00:55.357240 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:00:55.357653 update_engine[1582]: I20250430 01:00:55.357595 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 01:00:55.358566 update_engine[1582]: E20250430 01:00:55.358458 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:00:55.358566 update_engine[1582]: I20250430 01:00:55.358531 1582 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 01:00:55.358566 update_engine[1582]: I20250430 01:00:55.358541 1582 omaha_request_action.cc:617] Omaha request response: Apr 30 01:00:55.358790 update_engine[1582]: E20250430 01:00:55.358637 1582 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 01:00:55.358790 update_engine[1582]: I20250430 01:00:55.358655 1582 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Apr 30 01:00:55.358790 update_engine[1582]: I20250430 01:00:55.358661 1582 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 01:00:55.358790 update_engine[1582]: I20250430 01:00:55.358666 1582 update_attempter.cc:306] Processing Done. Apr 30 01:00:55.358790 update_engine[1582]: E20250430 01:00:55.358681 1582 update_attempter.cc:619] Update failed. Apr 30 01:00:55.358790 update_engine[1582]: I20250430 01:00:55.358686 1582 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 01:00:55.358790 update_engine[1582]: I20250430 01:00:55.358691 1582 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 01:00:55.358790 update_engine[1582]: I20250430 01:00:55.358696 1582 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 01:00:55.358790 update_engine[1582]: I20250430 01:00:55.358779 1582 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.358804 1582 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.358810 1582 omaha_request_action.cc:272] Request: Apr 30 01:00:55.359888 update_engine[1582]: [Omaha request XML body elided; markup stripped in capture] Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.358817 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.358999 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.359169 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 01:00:55.359888 update_engine[1582]: E20250430 01:00:55.359823 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.359879 1582 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.359886 1582 omaha_request_action.cc:617] Omaha request response: Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.359894 1582 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 01:00:55.359888 update_engine[1582]: I20250430 01:00:55.359899 1582 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 01:00:55.360681 locksmithd[1630]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 01:00:55.360681 locksmithd[1630]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 01:00:55.361322 update_engine[1582]: I20250430 01:00:55.359910 1582 update_attempter.cc:306] Processing Done. Apr 30 01:00:55.361322 update_engine[1582]: I20250430 01:00:55.359917 1582 update_attempter.cc:310] Error event sent. 
Apr 30 01:00:55.361322 update_engine[1582]: I20250430 01:00:55.359926 1582 update_check_scheduler.cc:74] Next update check in 43m52s Apr 30 01:00:58.369443 systemd[1]: Started sshd@39-49.12.45.4:22-139.178.68.195:41032.service - OpenSSH per-connection server daemon (139.178.68.195:41032). Apr 30 01:00:59.369392 sshd[7086]: Accepted publickey for core from 139.178.68.195 port 41032 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:00:59.371811 sshd[7086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:00:59.377620 systemd-logind[1580]: New session 40 of user core. Apr 30 01:00:59.382336 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 30 01:01:00.189369 sshd[7086]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:00.195511 systemd[1]: sshd@39-49.12.45.4:22-139.178.68.195:41032.service: Deactivated successfully. Apr 30 01:01:00.203443 systemd[1]: session-40.scope: Deactivated successfully. Apr 30 01:01:00.205144 systemd-logind[1580]: Session 40 logged out. Waiting for processes to exit. Apr 30 01:01:00.207604 systemd-logind[1580]: Removed session 40. Apr 30 01:01:05.356972 systemd[1]: Started sshd@40-49.12.45.4:22-139.178.68.195:54834.service - OpenSSH per-connection server daemon (139.178.68.195:54834). Apr 30 01:01:06.343548 sshd[7102]: Accepted publickey for core from 139.178.68.195 port 54834 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:06.345846 sshd[7102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:06.353922 systemd-logind[1580]: New session 41 of user core. Apr 30 01:01:06.357375 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 30 01:01:07.107614 sshd[7102]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:07.113690 systemd[1]: sshd@40-49.12.45.4:22-139.178.68.195:54834.service: Deactivated successfully. Apr 30 01:01:07.123425 systemd-logind[1580]: Session 41 logged out. Waiting for processes to exit. Apr 30 01:01:07.124382 systemd[1]: session-41.scope: Deactivated successfully. Apr 30 01:01:07.132693 systemd-logind[1580]: Removed session 41. Apr 30 01:01:12.274688 systemd[1]: Started sshd@41-49.12.45.4:22-139.178.68.195:54840.service - OpenSSH per-connection server daemon (139.178.68.195:54840). Apr 30 01:01:13.248074 sshd[7136]: Accepted publickey for core from 139.178.68.195 port 54840 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:13.250010 sshd[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:13.259332 systemd-logind[1580]: New session 42 of user core. Apr 30 01:01:13.263729 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 30 01:01:14.004412 sshd[7136]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:14.026203 systemd[1]: sshd@41-49.12.45.4:22-139.178.68.195:54840.service: Deactivated successfully. Apr 30 01:01:14.035777 systemd-logind[1580]: Session 42 logged out. Waiting for processes to exit. Apr 30 01:01:14.036171 systemd[1]: session-42.scope: Deactivated successfully. Apr 30 01:01:14.038159 systemd-logind[1580]: Removed session 42. Apr 30 01:01:19.169360 systemd[1]: Started sshd@42-49.12.45.4:22-139.178.68.195:53584.service - OpenSSH per-connection server daemon (139.178.68.195:53584). 
Apr 30 01:01:20.153249 sshd[7191]: Accepted publickey for core from 139.178.68.195 port 53584 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:20.156880 sshd[7191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:20.165777 systemd-logind[1580]: New session 43 of user core. Apr 30 01:01:20.170728 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 30 01:01:20.922834 sshd[7191]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:20.929542 systemd[1]: sshd@42-49.12.45.4:22-139.178.68.195:53584.service: Deactivated successfully. Apr 30 01:01:20.933700 systemd[1]: session-43.scope: Deactivated successfully. Apr 30 01:01:20.935022 systemd-logind[1580]: Session 43 logged out. Waiting for processes to exit. Apr 30 01:01:20.936089 systemd-logind[1580]: Removed session 43. Apr 30 01:01:26.088995 systemd[1]: Started sshd@43-49.12.45.4:22-139.178.68.195:37140.service - OpenSSH per-connection server daemon (139.178.68.195:37140). Apr 30 01:01:27.089566 sshd[7222]: Accepted publickey for core from 139.178.68.195 port 37140 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:27.092858 sshd[7222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:27.101187 systemd-logind[1580]: New session 44 of user core. Apr 30 01:01:27.106464 systemd[1]: Started session-44.scope - Session 44 of User core. Apr 30 01:01:27.849202 sshd[7222]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:27.853572 systemd[1]: sshd@43-49.12.45.4:22-139.178.68.195:37140.service: Deactivated successfully. Apr 30 01:01:27.855194 systemd-logind[1580]: Session 44 logged out. Waiting for processes to exit. Apr 30 01:01:27.858672 systemd[1]: session-44.scope: Deactivated successfully. Apr 30 01:01:27.861616 systemd-logind[1580]: Removed session 44. Apr 30 01:01:33.012478 systemd[1]: Started sshd@44-49.12.45.4:22-139.178.68.195:37150.service - OpenSSH per-connection server daemon (139.178.68.195:37150). Apr 30 01:01:33.985896 sshd[7238]: Accepted publickey for core from 139.178.68.195 port 37150 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:33.987640 sshd[7238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:33.994522 systemd-logind[1580]: New session 45 of user core. Apr 30 01:01:34.000433 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 30 01:01:34.739891 sshd[7238]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:34.745269 systemd[1]: sshd@44-49.12.45.4:22-139.178.68.195:37150.service: Deactivated successfully. Apr 30 01:01:34.750857 systemd[1]: session-45.scope: Deactivated successfully. Apr 30 01:01:34.754184 systemd-logind[1580]: Session 45 logged out. Waiting for processes to exit. Apr 30 01:01:34.756354 systemd-logind[1580]: Removed session 45. Apr 30 01:01:39.909480 systemd[1]: Started sshd@45-49.12.45.4:22-139.178.68.195:33522.service - OpenSSH per-connection server daemon (139.178.68.195:33522). Apr 30 01:01:40.890568 sshd[7252]: Accepted publickey for core from 139.178.68.195 port 33522 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:40.892992 sshd[7252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:40.898814 systemd-logind[1580]: New session 46 of user core. Apr 30 01:01:40.908537 systemd[1]: Started session-46.scope - Session 46 of User core. 
Apr 30 01:01:41.651277 sshd[7252]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:41.655625 systemd[1]: sshd@45-49.12.45.4:22-139.178.68.195:33522.service: Deactivated successfully. Apr 30 01:01:41.660448 systemd[1]: session-46.scope: Deactivated successfully. Apr 30 01:01:41.662539 systemd-logind[1580]: Session 46 logged out. Waiting for processes to exit. Apr 30 01:01:41.664143 systemd-logind[1580]: Removed session 46. Apr 30 01:01:46.820452 systemd[1]: Started sshd@46-49.12.45.4:22-139.178.68.195:54166.service - OpenSSH per-connection server daemon (139.178.68.195:54166). Apr 30 01:01:47.807548 sshd[7268]: Accepted publickey for core from 139.178.68.195 port 54166 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:47.809844 sshd[7268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:47.816169 systemd-logind[1580]: New session 47 of user core. Apr 30 01:01:47.820446 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 30 01:01:48.562875 sshd[7268]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:48.569117 systemd-logind[1580]: Session 47 logged out. Waiting for processes to exit. Apr 30 01:01:48.570430 systemd[1]: sshd@46-49.12.45.4:22-139.178.68.195:54166.service: Deactivated successfully. Apr 30 01:01:48.575333 systemd[1]: session-47.scope: Deactivated successfully. Apr 30 01:01:48.576861 systemd-logind[1580]: Removed session 47. Apr 30 01:01:53.729398 systemd[1]: Started sshd@47-49.12.45.4:22-139.178.68.195:54174.service - OpenSSH per-connection server daemon (139.178.68.195:54174). Apr 30 01:01:54.708216 sshd[7323]: Accepted publickey for core from 139.178.68.195 port 54174 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:01:54.710310 sshd[7323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:01:54.716003 systemd-logind[1580]: New session 48 of user core. Apr 30 01:01:54.721466 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 30 01:01:55.471308 sshd[7323]: pam_unix(sshd:session): session closed for user core Apr 30 01:01:55.475267 systemd[1]: sshd@47-49.12.45.4:22-139.178.68.195:54174.service: Deactivated successfully. Apr 30 01:01:55.481513 systemd[1]: session-48.scope: Deactivated successfully. Apr 30 01:01:55.485342 systemd-logind[1580]: Session 48 logged out. Waiting for processes to exit. Apr 30 01:01:55.487035 systemd-logind[1580]: Removed session 48. Apr 30 01:02:00.637664 systemd[1]: Started sshd@48-49.12.45.4:22-139.178.68.195:35494.service - OpenSSH per-connection server daemon (139.178.68.195:35494). Apr 30 01:02:01.632140 sshd[7345]: Accepted publickey for core from 139.178.68.195 port 35494 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:01.637418 sshd[7345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:01.644029 systemd-logind[1580]: New session 49 of user core. Apr 30 01:02:01.648406 systemd[1]: Started session-49.scope - Session 49 of User core. Apr 30 01:02:02.395688 sshd[7345]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:02.403597 systemd[1]: sshd@48-49.12.45.4:22-139.178.68.195:35494.service: Deactivated successfully. Apr 30 01:02:02.408582 systemd-logind[1580]: Session 49 logged out. Waiting for processes to exit. Apr 30 01:02:02.409541 systemd[1]: session-49.scope: Deactivated successfully. Apr 30 01:02:02.411276 systemd-logind[1580]: Removed session 49. 
Apr 30 01:02:07.566561 systemd[1]: Started sshd@49-49.12.45.4:22-139.178.68.195:45576.service - OpenSSH per-connection server daemon (139.178.68.195:45576). Apr 30 01:02:08.546216 sshd[7376]: Accepted publickey for core from 139.178.68.195 port 45576 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:08.549489 sshd[7376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:08.558314 systemd-logind[1580]: New session 50 of user core. Apr 30 01:02:08.564324 systemd[1]: Started session-50.scope - Session 50 of User core. Apr 30 01:02:09.300196 sshd[7376]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:09.309772 systemd-logind[1580]: Session 50 logged out. Waiting for processes to exit. Apr 30 01:02:09.311780 systemd[1]: sshd@49-49.12.45.4:22-139.178.68.195:45576.service: Deactivated successfully. Apr 30 01:02:09.317580 systemd[1]: session-50.scope: Deactivated successfully. Apr 30 01:02:09.319179 systemd-logind[1580]: Removed session 50. Apr 30 01:02:14.468885 systemd[1]: Started sshd@50-49.12.45.4:22-139.178.68.195:45582.service - OpenSSH per-connection server daemon (139.178.68.195:45582). Apr 30 01:02:15.450452 sshd[7392]: Accepted publickey for core from 139.178.68.195 port 45582 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:15.452221 sshd[7392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:15.458669 systemd-logind[1580]: New session 51 of user core. Apr 30 01:02:15.467353 systemd[1]: Started session-51.scope - Session 51 of User core. Apr 30 01:02:16.219118 sshd[7392]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:16.224503 systemd-logind[1580]: Session 51 logged out. Waiting for processes to exit. Apr 30 01:02:16.226176 systemd[1]: sshd@50-49.12.45.4:22-139.178.68.195:45582.service: Deactivated successfully. Apr 30 01:02:16.230671 systemd[1]: session-51.scope: Deactivated successfully. Apr 30 01:02:16.234151 systemd-logind[1580]: Removed session 51. Apr 30 01:02:17.372716 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.9PNQwH.mount: Deactivated successfully. Apr 30 01:02:21.387398 systemd[1]: Started sshd@51-49.12.45.4:22-139.178.68.195:33540.service - OpenSSH per-connection server daemon (139.178.68.195:33540). Apr 30 01:02:22.381777 sshd[7448]: Accepted publickey for core from 139.178.68.195 port 33540 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:22.383983 sshd[7448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:22.391131 systemd-logind[1580]: New session 52 of user core. Apr 30 01:02:22.400481 systemd[1]: Started session-52.scope - Session 52 of User core. Apr 30 01:02:23.164306 sshd[7448]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:23.170560 systemd-logind[1580]: Session 52 logged out. Waiting for processes to exit. Apr 30 01:02:23.170999 systemd[1]: sshd@51-49.12.45.4:22-139.178.68.195:33540.service: Deactivated successfully. Apr 30 01:02:23.176889 systemd[1]: session-52.scope: Deactivated successfully. Apr 30 01:02:23.178464 systemd-logind[1580]: Removed session 52. Apr 30 01:02:28.331428 systemd[1]: Started sshd@52-49.12.45.4:22-139.178.68.195:35442.service - OpenSSH per-connection server daemon (139.178.68.195:35442). 
Apr 30 01:02:29.317088 sshd[7462]: Accepted publickey for core from 139.178.68.195 port 35442 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:29.319461 sshd[7462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:29.325587 systemd-logind[1580]: New session 53 of user core. Apr 30 01:02:29.329610 systemd[1]: Started session-53.scope - Session 53 of User core. Apr 30 01:02:30.081995 sshd[7462]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:30.086927 systemd-logind[1580]: Session 53 logged out. Waiting for processes to exit. Apr 30 01:02:30.089131 systemd[1]: sshd@52-49.12.45.4:22-139.178.68.195:35442.service: Deactivated successfully. Apr 30 01:02:30.095335 systemd[1]: session-53.scope: Deactivated successfully. Apr 30 01:02:30.096730 systemd-logind[1580]: Removed session 53. Apr 30 01:02:35.250374 systemd[1]: Started sshd@53-49.12.45.4:22-139.178.68.195:52148.service - OpenSSH per-connection server daemon (139.178.68.195:52148). Apr 30 01:02:36.226071 sshd[7478]: Accepted publickey for core from 139.178.68.195 port 52148 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:36.228096 sshd[7478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:36.234481 systemd-logind[1580]: New session 54 of user core. Apr 30 01:02:36.239450 systemd[1]: Started session-54.scope - Session 54 of User core. Apr 30 01:02:36.978693 sshd[7478]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:36.985636 systemd[1]: sshd@53-49.12.45.4:22-139.178.68.195:52148.service: Deactivated successfully. Apr 30 01:02:36.991680 systemd[1]: session-54.scope: Deactivated successfully. Apr 30 01:02:36.993541 systemd-logind[1580]: Session 54 logged out. Waiting for processes to exit. Apr 30 01:02:36.997300 systemd-logind[1580]: Removed session 54. Apr 30 01:02:42.148412 systemd[1]: Started sshd@54-49.12.45.4:22-139.178.68.195:52164.service - OpenSSH per-connection server daemon (139.178.68.195:52164). Apr 30 01:02:43.138314 sshd[7492]: Accepted publickey for core from 139.178.68.195 port 52164 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:43.141013 sshd[7492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:43.147458 systemd-logind[1580]: New session 55 of user core. Apr 30 01:02:43.153403 systemd[1]: Started session-55.scope - Session 55 of User core. Apr 30 01:02:43.899282 sshd[7492]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:43.910017 systemd[1]: sshd@54-49.12.45.4:22-139.178.68.195:52164.service: Deactivated successfully. Apr 30 01:02:43.915502 systemd-logind[1580]: Session 55 logged out. Waiting for processes to exit. Apr 30 01:02:43.915678 systemd[1]: session-55.scope: Deactivated successfully. Apr 30 01:02:43.918987 systemd-logind[1580]: Removed session 55. Apr 30 01:02:49.073868 systemd[1]: Started sshd@55-49.12.45.4:22-139.178.68.195:52764.service - OpenSSH per-connection server daemon (139.178.68.195:52764). Apr 30 01:02:50.049172 sshd[7547]: Accepted publickey for core from 139.178.68.195 port 52764 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:50.051502 sshd[7547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:50.057986 systemd-logind[1580]: New session 56 of user core. Apr 30 01:02:50.061303 systemd[1]: Started session-56.scope - Session 56 of User core. 
Apr 30 01:02:50.805508 sshd[7547]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:50.814755 systemd[1]: sshd@55-49.12.45.4:22-139.178.68.195:52764.service: Deactivated successfully. Apr 30 01:02:50.816777 systemd-logind[1580]: Session 56 logged out. Waiting for processes to exit. Apr 30 01:02:50.819702 systemd[1]: session-56.scope: Deactivated successfully. Apr 30 01:02:50.821186 systemd-logind[1580]: Removed session 56. Apr 30 01:02:55.976284 systemd[1]: Started sshd@56-49.12.45.4:22-139.178.68.195:39570.service - OpenSSH per-connection server daemon (139.178.68.195:39570). Apr 30 01:02:56.973352 sshd[7562]: Accepted publickey for core from 139.178.68.195 port 39570 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:02:56.977499 sshd[7562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:02:56.986149 systemd-logind[1580]: New session 57 of user core. Apr 30 01:02:56.990303 systemd[1]: Started session-57.scope - Session 57 of User core. Apr 30 01:02:57.748307 sshd[7562]: pam_unix(sshd:session): session closed for user core Apr 30 01:02:57.753405 systemd[1]: sshd@56-49.12.45.4:22-139.178.68.195:39570.service: Deactivated successfully. Apr 30 01:02:57.760323 systemd[1]: session-57.scope: Deactivated successfully. Apr 30 01:02:57.761926 systemd-logind[1580]: Session 57 logged out. Waiting for processes to exit. Apr 30 01:02:57.764135 systemd-logind[1580]: Removed session 57. Apr 30 01:03:02.915567 systemd[1]: Started sshd@57-49.12.45.4:22-139.178.68.195:39580.service - OpenSSH per-connection server daemon (139.178.68.195:39580). Apr 30 01:03:03.909390 sshd[7594]: Accepted publickey for core from 139.178.68.195 port 39580 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:03.912921 sshd[7594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:03.918764 systemd-logind[1580]: New session 58 of user core. Apr 30 01:03:03.922350 systemd[1]: Started session-58.scope - Session 58 of User core. Apr 30 01:03:04.712238 sshd[7594]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:04.720570 systemd[1]: sshd@57-49.12.45.4:22-139.178.68.195:39580.service: Deactivated successfully. Apr 30 01:03:04.726684 systemd[1]: session-58.scope: Deactivated successfully. Apr 30 01:03:04.729305 systemd-logind[1580]: Session 58 logged out. Waiting for processes to exit. Apr 30 01:03:04.731139 systemd-logind[1580]: Removed session 58. Apr 30 01:03:07.542673 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.lbPHYz.mount: Deactivated successfully. Apr 30 01:03:09.873951 systemd[1]: Started sshd@58-49.12.45.4:22-139.178.68.195:58072.service - OpenSSH per-connection server daemon (139.178.68.195:58072). Apr 30 01:03:10.851986 sshd[7627]: Accepted publickey for core from 139.178.68.195 port 58072 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:10.853379 sshd[7627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:10.859726 systemd-logind[1580]: New session 59 of user core. Apr 30 01:03:10.864323 systemd[1]: Started session-59.scope - Session 59 of User core. Apr 30 01:03:11.611481 sshd[7627]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:11.616783 systemd[1]: sshd@58-49.12.45.4:22-139.178.68.195:58072.service: Deactivated successfully. Apr 30 01:03:11.623304 systemd[1]: session-59.scope: Deactivated successfully. 
Apr 30 01:03:11.625594 systemd-logind[1580]: Session 59 logged out. Waiting for processes to exit. Apr 30 01:03:11.627048 systemd-logind[1580]: Removed session 59. Apr 30 01:03:16.779276 systemd[1]: Started sshd@59-49.12.45.4:22-139.178.68.195:56318.service - OpenSSH per-connection server daemon (139.178.68.195:56318). Apr 30 01:03:17.762811 sshd[7641]: Accepted publickey for core from 139.178.68.195 port 56318 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:17.763825 sshd[7641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:17.770339 systemd-logind[1580]: New session 60 of user core. Apr 30 01:03:17.777665 systemd[1]: Started session-60.scope - Session 60 of User core. Apr 30 01:03:18.525423 sshd[7641]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:18.530882 systemd[1]: sshd@59-49.12.45.4:22-139.178.68.195:56318.service: Deactivated successfully. Apr 30 01:03:18.536868 systemd-logind[1580]: Session 60 logged out. Waiting for processes to exit. Apr 30 01:03:18.537575 systemd[1]: session-60.scope: Deactivated successfully. Apr 30 01:03:18.542062 systemd-logind[1580]: Removed session 60. Apr 30 01:03:23.691305 systemd[1]: Started sshd@60-49.12.45.4:22-139.178.68.195:56320.service - OpenSSH per-connection server daemon (139.178.68.195:56320). Apr 30 01:03:24.663719 sshd[7698]: Accepted publickey for core from 139.178.68.195 port 56320 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:24.665372 sshd[7698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:24.672234 systemd-logind[1580]: New session 61 of user core. Apr 30 01:03:24.681631 systemd[1]: Started session-61.scope - Session 61 of User core. Apr 30 01:03:25.415255 sshd[7698]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:25.421447 systemd[1]: sshd@60-49.12.45.4:22-139.178.68.195:56320.service: Deactivated successfully. Apr 30 01:03:25.429187 systemd[1]: session-61.scope: Deactivated successfully. Apr 30 01:03:25.432063 systemd-logind[1580]: Session 61 logged out. Waiting for processes to exit. Apr 30 01:03:25.433729 systemd-logind[1580]: Removed session 61. Apr 30 01:03:30.582809 systemd[1]: Started sshd@61-49.12.45.4:22-139.178.68.195:44774.service - OpenSSH per-connection server daemon (139.178.68.195:44774). Apr 30 01:03:31.579484 sshd[7711]: Accepted publickey for core from 139.178.68.195 port 44774 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:31.582370 sshd[7711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:31.589596 systemd-logind[1580]: New session 62 of user core. Apr 30 01:03:31.596571 systemd[1]: Started session-62.scope - Session 62 of User core. Apr 30 01:03:32.335393 sshd[7711]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:32.343640 systemd[1]: sshd@61-49.12.45.4:22-139.178.68.195:44774.service: Deactivated successfully. Apr 30 01:03:32.347996 systemd[1]: session-62.scope: Deactivated successfully. Apr 30 01:03:32.349466 systemd-logind[1580]: Session 62 logged out. Waiting for processes to exit. Apr 30 01:03:32.350676 systemd-logind[1580]: Removed session 62. Apr 30 01:03:37.507645 systemd[1]: Started sshd@62-49.12.45.4:22-139.178.68.195:48616.service - OpenSSH per-connection server daemon (139.178.68.195:48616). 
Apr 30 01:03:38.490812 sshd[7727]: Accepted publickey for core from 139.178.68.195 port 48616 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:38.493054 sshd[7727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:38.510060 systemd-logind[1580]: New session 63 of user core. Apr 30 01:03:38.518496 systemd[1]: Started session-63.scope - Session 63 of User core. Apr 30 01:03:39.259299 sshd[7727]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:39.264517 systemd[1]: sshd@62-49.12.45.4:22-139.178.68.195:48616.service: Deactivated successfully. Apr 30 01:03:39.268739 systemd[1]: session-63.scope: Deactivated successfully. Apr 30 01:03:39.270069 systemd-logind[1580]: Session 63 logged out. Waiting for processes to exit. Apr 30 01:03:39.271340 systemd-logind[1580]: Removed session 63. Apr 30 01:03:44.430767 systemd[1]: Started sshd@63-49.12.45.4:22-139.178.68.195:48626.service - OpenSSH per-connection server daemon (139.178.68.195:48626). Apr 30 01:03:45.416038 sshd[7741]: Accepted publickey for core from 139.178.68.195 port 48626 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:45.418531 sshd[7741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:45.427276 systemd-logind[1580]: New session 64 of user core. Apr 30 01:03:45.431425 systemd[1]: Started session-64.scope - Session 64 of User core. Apr 30 01:03:46.181204 sshd[7741]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:46.187130 systemd[1]: sshd@63-49.12.45.4:22-139.178.68.195:48626.service: Deactivated successfully. Apr 30 01:03:46.192788 systemd[1]: session-64.scope: Deactivated successfully. Apr 30 01:03:46.194235 systemd-logind[1580]: Session 64 logged out. Waiting for processes to exit. Apr 30 01:03:46.195633 systemd-logind[1580]: Removed session 64. Apr 30 01:03:51.346262 systemd[1]: Started sshd@64-49.12.45.4:22-139.178.68.195:51744.service - OpenSSH per-connection server daemon (139.178.68.195:51744). Apr 30 01:03:52.339004 sshd[7799]: Accepted publickey for core from 139.178.68.195 port 51744 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:52.341530 sshd[7799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:52.348718 systemd-logind[1580]: New session 65 of user core. Apr 30 01:03:52.353302 systemd[1]: Started session-65.scope - Session 65 of User core. Apr 30 01:03:53.095330 sshd[7799]: pam_unix(sshd:session): session closed for user core Apr 30 01:03:53.104578 systemd[1]: sshd@64-49.12.45.4:22-139.178.68.195:51744.service: Deactivated successfully. Apr 30 01:03:53.108445 systemd[1]: session-65.scope: Deactivated successfully. Apr 30 01:03:53.109375 systemd-logind[1580]: Session 65 logged out. Waiting for processes to exit. Apr 30 01:03:53.111530 systemd-logind[1580]: Removed session 65. Apr 30 01:03:58.266314 systemd[1]: Started sshd@65-49.12.45.4:22-139.178.68.195:53882.service - OpenSSH per-connection server daemon (139.178.68.195:53882). Apr 30 01:03:59.251892 sshd[7813]: Accepted publickey for core from 139.178.68.195 port 53882 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:03:59.254702 sshd[7813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:03:59.260099 systemd-logind[1580]: New session 66 of user core. Apr 30 01:03:59.267621 systemd[1]: Started session-66.scope - Session 66 of User core. 
Apr 30 01:04:00.017122 sshd[7813]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:00.024405 systemd[1]: sshd@65-49.12.45.4:22-139.178.68.195:53882.service: Deactivated successfully. Apr 30 01:04:00.031474 systemd[1]: session-66.scope: Deactivated successfully. Apr 30 01:04:00.033911 systemd-logind[1580]: Session 66 logged out. Waiting for processes to exit. Apr 30 01:04:00.036443 systemd-logind[1580]: Removed session 66. Apr 30 01:04:05.190607 systemd[1]: Started sshd@66-49.12.45.4:22-139.178.68.195:53884.service - OpenSSH per-connection server daemon (139.178.68.195:53884). Apr 30 01:04:06.184924 sshd[7828]: Accepted publickey for core from 139.178.68.195 port 53884 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:06.187340 sshd[7828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:06.193056 systemd-logind[1580]: New session 67 of user core. Apr 30 01:04:06.201426 systemd[1]: Started session-67.scope - Session 67 of User core. Apr 30 01:04:06.978715 sshd[7828]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:06.987150 systemd[1]: sshd@66-49.12.45.4:22-139.178.68.195:53884.service: Deactivated successfully. Apr 30 01:04:06.993150 systemd[1]: session-67.scope: Deactivated successfully. Apr 30 01:04:06.995752 systemd-logind[1580]: Session 67 logged out. Waiting for processes to exit. Apr 30 01:04:06.997854 systemd-logind[1580]: Removed session 67. Apr 30 01:04:12.143392 systemd[1]: Started sshd@67-49.12.45.4:22-139.178.68.195:55916.service - OpenSSH per-connection server daemon (139.178.68.195:55916). Apr 30 01:04:13.127732 sshd[7861]: Accepted publickey for core from 139.178.68.195 port 55916 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:13.130401 sshd[7861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:13.137207 systemd-logind[1580]: New session 68 of user core. Apr 30 01:04:13.143450 systemd[1]: Started session-68.scope - Session 68 of User core. Apr 30 01:04:13.888862 sshd[7861]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:13.896219 systemd[1]: sshd@67-49.12.45.4:22-139.178.68.195:55916.service: Deactivated successfully. Apr 30 01:04:13.899162 systemd-logind[1580]: Session 68 logged out. Waiting for processes to exit. Apr 30 01:04:13.910996 systemd[1]: session-68.scope: Deactivated successfully. Apr 30 01:04:13.918269 systemd-logind[1580]: Removed session 68. Apr 30 01:04:19.059296 systemd[1]: Started sshd@68-49.12.45.4:22-139.178.68.195:48778.service - OpenSSH per-connection server daemon (139.178.68.195:48778). Apr 30 01:04:20.052787 sshd[7923]: Accepted publickey for core from 139.178.68.195 port 48778 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:20.056034 sshd[7923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:20.062231 systemd-logind[1580]: New session 69 of user core. Apr 30 01:04:20.072427 systemd[1]: Started session-69.scope - Session 69 of User core. Apr 30 01:04:20.822131 sshd[7923]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:20.829282 systemd[1]: sshd@68-49.12.45.4:22-139.178.68.195:48778.service: Deactivated successfully. Apr 30 01:04:20.832863 systemd-logind[1580]: Session 69 logged out. Waiting for processes to exit. Apr 30 01:04:20.833774 systemd[1]: session-69.scope: Deactivated successfully. Apr 30 01:04:20.836447 systemd-logind[1580]: Removed session 69. 
Apr 30 01:04:25.984483 systemd[1]: Started sshd@69-49.12.45.4:22-139.178.68.195:45324.service - OpenSSH per-connection server daemon (139.178.68.195:45324). Apr 30 01:04:26.954754 sshd[7937]: Accepted publickey for core from 139.178.68.195 port 45324 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:26.956090 sshd[7937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:26.963139 systemd-logind[1580]: New session 70 of user core. Apr 30 01:04:26.969461 systemd[1]: Started session-70.scope - Session 70 of User core. Apr 30 01:04:27.703403 sshd[7937]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:27.707498 systemd[1]: sshd@69-49.12.45.4:22-139.178.68.195:45324.service: Deactivated successfully. Apr 30 01:04:27.711856 systemd-logind[1580]: Session 70 logged out. Waiting for processes to exit. Apr 30 01:04:27.713078 systemd[1]: session-70.scope: Deactivated successfully. Apr 30 01:04:27.715552 systemd-logind[1580]: Removed session 70. Apr 30 01:04:27.872093 systemd[1]: Started sshd@70-49.12.45.4:22-139.178.68.195:45340.service - OpenSSH per-connection server daemon (139.178.68.195:45340). Apr 30 01:04:28.866579 sshd[7958]: Accepted publickey for core from 139.178.68.195 port 45340 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:28.870544 sshd[7958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:28.878523 systemd-logind[1580]: New session 71 of user core. Apr 30 01:04:28.883361 systemd[1]: Started session-71.scope - Session 71 of User core. Apr 30 01:04:29.781628 sshd[7958]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:29.786549 systemd[1]: sshd@70-49.12.45.4:22-139.178.68.195:45340.service: Deactivated successfully. Apr 30 01:04:29.793416 systemd[1]: session-71.scope: Deactivated successfully. Apr 30 01:04:29.795087 systemd-logind[1580]: Session 71 logged out. Waiting for processes to exit. Apr 30 01:04:29.796759 systemd-logind[1580]: Removed session 71. Apr 30 01:04:29.946393 systemd[1]: Started sshd@71-49.12.45.4:22-139.178.68.195:45352.service - OpenSSH per-connection server daemon (139.178.68.195:45352). Apr 30 01:04:30.939981 sshd[7971]: Accepted publickey for core from 139.178.68.195 port 45352 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:30.942002 sshd[7971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:30.947532 systemd-logind[1580]: New session 72 of user core. Apr 30 01:04:30.951507 systemd[1]: Started session-72.scope - Session 72 of User core. Apr 30 01:04:33.711606 sshd[7971]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:33.717776 systemd[1]: sshd@71-49.12.45.4:22-139.178.68.195:45352.service: Deactivated successfully. Apr 30 01:04:33.724076 systemd[1]: session-72.scope: Deactivated successfully. Apr 30 01:04:33.727086 systemd-logind[1580]: Session 72 logged out. Waiting for processes to exit. Apr 30 01:04:33.728593 systemd-logind[1580]: Removed session 72. Apr 30 01:04:33.884027 systemd[1]: Started sshd@72-49.12.45.4:22-139.178.68.195:45364.service - OpenSSH per-connection server daemon (139.178.68.195:45364). 
Apr 30 01:04:34.856682 sshd[8007]: Accepted publickey for core from 139.178.68.195 port 45364 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:34.860571 sshd[8007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:34.865870 systemd-logind[1580]: New session 73 of user core. Apr 30 01:04:34.876429 systemd[1]: Started session-73.scope - Session 73 of User core. Apr 30 01:04:35.835361 sshd[8007]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:35.840730 systemd-logind[1580]: Session 73 logged out. Waiting for processes to exit. Apr 30 01:04:35.840980 systemd[1]: sshd@72-49.12.45.4:22-139.178.68.195:45364.service: Deactivated successfully. Apr 30 01:04:35.847228 systemd[1]: session-73.scope: Deactivated successfully. Apr 30 01:04:35.849627 systemd-logind[1580]: Removed session 73. Apr 30 01:04:36.002440 systemd[1]: Started sshd@73-49.12.45.4:22-139.178.68.195:40460.service - OpenSSH per-connection server daemon (139.178.68.195:40460). Apr 30 01:04:36.996769 sshd[8019]: Accepted publickey for core from 139.178.68.195 port 40460 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:36.998518 sshd[8019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:37.009224 systemd-logind[1580]: New session 74 of user core. Apr 30 01:04:37.017465 systemd[1]: Started session-74.scope - Session 74 of User core. Apr 30 01:04:37.760185 sshd[8019]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:37.767179 systemd-logind[1580]: Session 74 logged out. Waiting for processes to exit. Apr 30 01:04:37.768492 systemd[1]: sshd@73-49.12.45.4:22-139.178.68.195:40460.service: Deactivated successfully. Apr 30 01:04:37.773848 systemd[1]: session-74.scope: Deactivated successfully. Apr 30 01:04:37.775623 systemd-logind[1580]: Removed session 74. Apr 30 01:04:42.931429 systemd[1]: Started sshd@74-49.12.45.4:22-139.178.68.195:40462.service - OpenSSH per-connection server daemon (139.178.68.195:40462). Apr 30 01:04:43.927166 sshd[8034]: Accepted publickey for core from 139.178.68.195 port 40462 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:43.929375 sshd[8034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:43.937599 systemd-logind[1580]: New session 75 of user core. Apr 30 01:04:43.941865 systemd[1]: Started session-75.scope - Session 75 of User core. Apr 30 01:04:44.691149 sshd[8034]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:44.698109 systemd-logind[1580]: Session 75 logged out. Waiting for processes to exit. Apr 30 01:04:44.698868 systemd[1]: sshd@74-49.12.45.4:22-139.178.68.195:40462.service: Deactivated successfully. Apr 30 01:04:44.705205 systemd[1]: session-75.scope: Deactivated successfully. Apr 30 01:04:44.707598 systemd-logind[1580]: Removed session 75. Apr 30 01:04:49.857442 systemd[1]: Started sshd@75-49.12.45.4:22-139.178.68.195:36722.service - OpenSSH per-connection server daemon (139.178.68.195:36722). Apr 30 01:04:50.835615 sshd[8090]: Accepted publickey for core from 139.178.68.195 port 36722 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:50.837396 sshd[8090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:50.844433 systemd-logind[1580]: New session 76 of user core. Apr 30 01:04:50.849354 systemd[1]: Started session-76.scope - Session 76 of User core. 
Apr 30 01:04:51.601318 sshd[8090]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:51.608357 systemd[1]: sshd@75-49.12.45.4:22-139.178.68.195:36722.service: Deactivated successfully. Apr 30 01:04:51.613842 systemd[1]: session-76.scope: Deactivated successfully. Apr 30 01:04:51.615312 systemd-logind[1580]: Session 76 logged out. Waiting for processes to exit. Apr 30 01:04:51.618234 systemd-logind[1580]: Removed session 76. Apr 30 01:04:56.767654 systemd[1]: Started sshd@76-49.12.45.4:22-139.178.68.195:38572.service - OpenSSH per-connection server daemon (139.178.68.195:38572). Apr 30 01:04:57.737976 sshd[8107]: Accepted publickey for core from 139.178.68.195 port 38572 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:04:57.739061 sshd[8107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:04:57.745594 systemd-logind[1580]: New session 77 of user core. Apr 30 01:04:57.756579 systemd[1]: Started session-77.scope - Session 77 of User core. Apr 30 01:04:58.489070 sshd[8107]: pam_unix(sshd:session): session closed for user core Apr 30 01:04:58.494701 systemd[1]: sshd@76-49.12.45.4:22-139.178.68.195:38572.service: Deactivated successfully. Apr 30 01:04:58.498573 systemd[1]: session-77.scope: Deactivated successfully. Apr 30 01:04:58.499861 systemd-logind[1580]: Session 77 logged out. Waiting for processes to exit. Apr 30 01:04:58.501412 systemd-logind[1580]: Removed session 77. Apr 30 01:05:03.664478 systemd[1]: Started sshd@77-49.12.45.4:22-139.178.68.195:38586.service - OpenSSH per-connection server daemon (139.178.68.195:38586). Apr 30 01:05:04.661154 sshd[8123]: Accepted publickey for core from 139.178.68.195 port 38586 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:04.664016 sshd[8123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:04.670053 systemd-logind[1580]: New session 78 of user core. Apr 30 01:05:04.679317 systemd[1]: Started session-78.scope - Session 78 of User core. Apr 30 01:05:05.433672 sshd[8123]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:05.442761 systemd[1]: sshd@77-49.12.45.4:22-139.178.68.195:38586.service: Deactivated successfully. Apr 30 01:05:05.453064 systemd[1]: session-78.scope: Deactivated successfully. Apr 30 01:05:05.455519 systemd-logind[1580]: Session 78 logged out. Waiting for processes to exit. Apr 30 01:05:05.463688 systemd-logind[1580]: Removed session 78. Apr 30 01:05:07.542548 systemd[1]: run-containerd-runc-k8s.io-79bfe64f7adca602d3013e42fe94c1db7d0a25c9a4d49d1691513e05ab399cff-runc.uDOuvN.mount: Deactivated successfully. Apr 30 01:05:10.598277 systemd[1]: Started sshd@78-49.12.45.4:22-139.178.68.195:46318.service - OpenSSH per-connection server daemon (139.178.68.195:46318). Apr 30 01:05:11.570015 sshd[8154]: Accepted publickey for core from 139.178.68.195 port 46318 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:11.572638 sshd[8154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:11.581812 systemd-logind[1580]: New session 79 of user core. Apr 30 01:05:11.587397 systemd[1]: Started session-79.scope - Session 79 of User core. Apr 30 01:05:12.325311 sshd[8154]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:12.331140 systemd[1]: sshd@78-49.12.45.4:22-139.178.68.195:46318.service: Deactivated successfully. Apr 30 01:05:12.335921 systemd[1]: session-79.scope: Deactivated successfully. 
Apr 30 01:05:12.341165 systemd-logind[1580]: Session 79 logged out. Waiting for processes to exit. Apr 30 01:05:12.342486 systemd-logind[1580]: Removed session 79. Apr 30 01:05:17.493729 systemd[1]: Started sshd@79-49.12.45.4:22-139.178.68.195:54048.service - OpenSSH per-connection server daemon (139.178.68.195:54048). Apr 30 01:05:18.497124 sshd[8197]: Accepted publickey for core from 139.178.68.195 port 54048 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:18.500089 sshd[8197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:18.508352 systemd-logind[1580]: New session 80 of user core. Apr 30 01:05:18.516586 systemd[1]: Started session-80.scope - Session 80 of User core. Apr 30 01:05:19.266061 sshd[8197]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:19.277366 systemd[1]: sshd@79-49.12.45.4:22-139.178.68.195:54048.service: Deactivated successfully. Apr 30 01:05:19.287361 systemd[1]: session-80.scope: Deactivated successfully. Apr 30 01:05:19.290423 systemd-logind[1580]: Session 80 logged out. Waiting for processes to exit. Apr 30 01:05:19.295035 systemd-logind[1580]: Removed session 80. Apr 30 01:05:24.436349 systemd[1]: Started sshd@80-49.12.45.4:22-139.178.68.195:54062.service - OpenSSH per-connection server daemon (139.178.68.195:54062). Apr 30 01:05:25.420836 sshd[8221]: Accepted publickey for core from 139.178.68.195 port 54062 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:25.422283 sshd[8221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:25.430467 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 30 01:05:25.436021 systemd-logind[1580]: New session 81 of user core. Apr 30 01:05:25.436847 systemd[1]: Started session-81.scope - Session 81 of User core. Apr 30 01:05:25.461215 systemd-tmpfiles[8223]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 01:05:25.462068 systemd-tmpfiles[8223]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 01:05:25.463162 systemd-tmpfiles[8223]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 01:05:25.463419 systemd-tmpfiles[8223]: ACLs are not supported, ignoring. Apr 30 01:05:25.463477 systemd-tmpfiles[8223]: ACLs are not supported, ignoring. Apr 30 01:05:25.466506 systemd-tmpfiles[8223]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 01:05:25.466523 systemd-tmpfiles[8223]: Skipping /boot Apr 30 01:05:25.477879 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Apr 30 01:05:25.479198 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 30 01:05:26.186042 sshd[8221]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:26.192099 systemd[1]: sshd@80-49.12.45.4:22-139.178.68.195:54062.service: Deactivated successfully. Apr 30 01:05:26.197081 systemd[1]: session-81.scope: Deactivated successfully. Apr 30 01:05:26.200042 systemd-logind[1580]: Session 81 logged out. Waiting for processes to exit. Apr 30 01:05:26.201344 systemd-logind[1580]: Removed session 81. Apr 30 01:05:31.352530 systemd[1]: Started sshd@81-49.12.45.4:22-139.178.68.195:60318.service - OpenSSH per-connection server daemon (139.178.68.195:60318). 
Apr 30 01:05:31.902629 systemd[1]: Started sshd@82-49.12.45.4:22-211.95.135.58:50554.service - OpenSSH per-connection server daemon (211.95.135.58:50554). Apr 30 01:05:32.111969 sshd[8240]: Connection closed by 211.95.135.58 port 50554 Apr 30 01:05:32.113148 systemd[1]: sshd@82-49.12.45.4:22-211.95.135.58:50554.service: Deactivated successfully. Apr 30 01:05:32.327319 sshd[8238]: Accepted publickey for core from 139.178.68.195 port 60318 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:32.331442 sshd[8238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:32.338731 systemd-logind[1580]: New session 82 of user core. Apr 30 01:05:32.346690 systemd[1]: Started session-82.scope - Session 82 of User core. Apr 30 01:05:33.083405 sshd[8238]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:33.089289 systemd[1]: sshd@81-49.12.45.4:22-139.178.68.195:60318.service: Deactivated successfully. Apr 30 01:05:33.099478 systemd-logind[1580]: Session 82 logged out. Waiting for processes to exit. Apr 30 01:05:33.102739 systemd[1]: session-82.scope: Deactivated successfully. Apr 30 01:05:33.109187 systemd-logind[1580]: Removed session 82. Apr 30 01:05:38.253498 systemd[1]: Started sshd@83-49.12.45.4:22-139.178.68.195:33658.service - OpenSSH per-connection server daemon (139.178.68.195:33658). Apr 30 01:05:39.245486 sshd[8258]: Accepted publickey for core from 139.178.68.195 port 33658 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:39.246182 sshd[8258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:39.254542 systemd-logind[1580]: New session 83 of user core. Apr 30 01:05:39.261031 systemd[1]: Started session-83.scope - Session 83 of User core. Apr 30 01:05:40.002623 sshd[8258]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:40.008154 systemd[1]: sshd@83-49.12.45.4:22-139.178.68.195:33658.service: Deactivated successfully. Apr 30 01:05:40.013484 systemd[1]: session-83.scope: Deactivated successfully. Apr 30 01:05:40.015763 systemd-logind[1580]: Session 83 logged out. Waiting for processes to exit. Apr 30 01:05:40.017599 systemd-logind[1580]: Removed session 83. Apr 30 01:05:45.174467 systemd[1]: Started sshd@84-49.12.45.4:22-139.178.68.195:33668.service - OpenSSH per-connection server daemon (139.178.68.195:33668). Apr 30 01:05:46.161998 sshd[8272]: Accepted publickey for core from 139.178.68.195 port 33668 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:46.164726 sshd[8272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:46.169894 systemd-logind[1580]: New session 84 of user core. Apr 30 01:05:46.174653 systemd[1]: Started session-84.scope - Session 84 of User core. Apr 30 01:05:46.918702 sshd[8272]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:46.925860 systemd-logind[1580]: Session 84 logged out. Waiting for processes to exit. Apr 30 01:05:46.927329 systemd[1]: sshd@84-49.12.45.4:22-139.178.68.195:33668.service: Deactivated successfully. Apr 30 01:05:46.933671 systemd[1]: session-84.scope: Deactivated successfully. Apr 30 01:05:46.936623 systemd-logind[1580]: Removed session 84. Apr 30 01:05:52.093571 systemd[1]: Started sshd@85-49.12.45.4:22-139.178.68.195:33468.service - OpenSSH per-connection server daemon (139.178.68.195:33468). 
Apr 30 01:05:53.075277 sshd[8326]: Accepted publickey for core from 139.178.68.195 port 33468 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:05:53.076643 sshd[8326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:05:53.084600 systemd-logind[1580]: New session 85 of user core. Apr 30 01:05:53.088341 systemd[1]: Started session-85.scope - Session 85 of User core. Apr 30 01:05:53.843281 sshd[8326]: pam_unix(sshd:session): session closed for user core Apr 30 01:05:53.852368 systemd[1]: sshd@85-49.12.45.4:22-139.178.68.195:33468.service: Deactivated successfully. Apr 30 01:05:53.857819 systemd[1]: session-85.scope: Deactivated successfully. Apr 30 01:05:53.860061 systemd-logind[1580]: Session 85 logged out. Waiting for processes to exit. Apr 30 01:05:53.861453 systemd-logind[1580]: Removed session 85. Apr 30 01:05:59.016406 systemd[1]: Started sshd@86-49.12.45.4:22-139.178.68.195:33494.service - OpenSSH per-connection server daemon (139.178.68.195:33494). Apr 30 01:06:00.007862 sshd[8340]: Accepted publickey for core from 139.178.68.195 port 33494 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:06:00.010630 sshd[8340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:06:00.018102 systemd-logind[1580]: New session 86 of user core. Apr 30 01:06:00.024317 systemd[1]: Started session-86.scope - Session 86 of User core. Apr 30 01:06:00.783369 sshd[8340]: pam_unix(sshd:session): session closed for user core Apr 30 01:06:00.788749 systemd-logind[1580]: Session 86 logged out. Waiting for processes to exit. Apr 30 01:06:00.789799 systemd[1]: sshd@86-49.12.45.4:22-139.178.68.195:33494.service: Deactivated successfully. Apr 30 01:06:00.796023 systemd[1]: session-86.scope: Deactivated successfully. Apr 30 01:06:00.799134 systemd-logind[1580]: Removed session 86. Apr 30 01:06:05.949290 systemd[1]: Started sshd@87-49.12.45.4:22-139.178.68.195:57278.service - OpenSSH per-connection server daemon (139.178.68.195:57278). Apr 30 01:06:06.963308 sshd[8361]: Accepted publickey for core from 139.178.68.195 port 57278 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:06:06.965544 sshd[8361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:06:06.972755 systemd-logind[1580]: New session 87 of user core. Apr 30 01:06:06.976232 systemd[1]: Started session-87.scope - Session 87 of User core. Apr 30 01:06:07.741616 sshd[8361]: pam_unix(sshd:session): session closed for user core Apr 30 01:06:07.745618 systemd[1]: sshd@87-49.12.45.4:22-139.178.68.195:57278.service: Deactivated successfully. Apr 30 01:06:07.753008 systemd[1]: session-87.scope: Deactivated successfully. Apr 30 01:06:07.754919 systemd-logind[1580]: Session 87 logged out. Waiting for processes to exit. Apr 30 01:06:07.757260 systemd-logind[1580]: Removed session 87. Apr 30 01:06:12.911347 systemd[1]: Started sshd@88-49.12.45.4:22-139.178.68.195:57280.service - OpenSSH per-connection server daemon (139.178.68.195:57280). Apr 30 01:06:13.891994 sshd[8407]: Accepted publickey for core from 139.178.68.195 port 57280 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:06:13.893852 sshd[8407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:06:13.902434 systemd-logind[1580]: New session 88 of user core. Apr 30 01:06:13.908338 systemd[1]: Started session-88.scope - Session 88 of User core. 
Apr 30 01:06:14.650208 sshd[8407]: pam_unix(sshd:session): session closed for user core Apr 30 01:06:14.657168 systemd[1]: sshd@88-49.12.45.4:22-139.178.68.195:57280.service: Deactivated successfully. Apr 30 01:06:14.662788 systemd[1]: session-88.scope: Deactivated successfully. Apr 30 01:06:14.664701 systemd-logind[1580]: Session 88 logged out. Waiting for processes to exit. Apr 30 01:06:14.666477 systemd-logind[1580]: Removed session 88. Apr 30 01:06:19.823286 systemd[1]: Started sshd@89-49.12.45.4:22-139.178.68.195:49086.service - OpenSSH per-connection server daemon (139.178.68.195:49086). Apr 30 01:06:20.818902 sshd[8468]: Accepted publickey for core from 139.178.68.195 port 49086 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:06:20.820860 sshd[8468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:06:20.825718 systemd-logind[1580]: New session 89 of user core. Apr 30 01:06:20.831433 systemd[1]: Started session-89.scope - Session 89 of User core. Apr 30 01:06:21.587545 sshd[8468]: pam_unix(sshd:session): session closed for user core Apr 30 01:06:21.593480 systemd[1]: sshd@89-49.12.45.4:22-139.178.68.195:49086.service: Deactivated successfully. Apr 30 01:06:21.599918 systemd[1]: session-89.scope: Deactivated successfully. Apr 30 01:06:21.601300 systemd-logind[1580]: Session 89 logged out. Waiting for processes to exit. Apr 30 01:06:21.602354 systemd-logind[1580]: Removed session 89. Apr 30 01:06:26.754345 systemd[1]: Started sshd@90-49.12.45.4:22-139.178.68.195:44736.service - OpenSSH per-connection server daemon (139.178.68.195:44736). Apr 30 01:06:27.754530 sshd[8482]: Accepted publickey for core from 139.178.68.195 port 44736 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 01:06:27.757439 sshd[8482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 01:06:27.764103 systemd-logind[1580]: New session 90 of user core. Apr 30 01:06:27.768594 systemd[1]: Started session-90.scope - Session 90 of User core. Apr 30 01:06:28.523474 sshd[8482]: pam_unix(sshd:session): session closed for user core Apr 30 01:06:28.530251 systemd[1]: sshd@90-49.12.45.4:22-139.178.68.195:44736.service: Deactivated successfully. Apr 30 01:06:28.535477 systemd[1]: session-90.scope: Deactivated successfully. Apr 30 01:06:28.535836 systemd-logind[1580]: Session 90 logged out. Waiting for processes to exit. Apr 30 01:06:28.537956 systemd-logind[1580]: Removed session 90. Apr 30 01:07:00.933809 containerd[1612]: time="2025-04-30T01:07:00.933676419Z" level=info msg="shim disconnected" id=5be8f62062b67200eb6196c8a753478276337fdc3d12b7b032870ab5f955a1a4 namespace=k8s.io Apr 30 01:07:00.933809 containerd[1612]: time="2025-04-30T01:07:00.933799861Z" level=warning msg="cleaning up after shim disconnected" id=5be8f62062b67200eb6196c8a753478276337fdc3d12b7b032870ab5f955a1a4 namespace=k8s.io Apr 30 01:07:00.933809 containerd[1612]: time="2025-04-30T01:07:00.933811021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 01:07:00.935433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be8f62062b67200eb6196c8a753478276337fdc3d12b7b032870ab5f955a1a4-rootfs.mount: Deactivated successfully. 
Apr 30 01:07:01.099233 kubelet[2954]: I0430 01:07:01.098766 2954 scope.go:117] "RemoveContainer" containerID="5be8f62062b67200eb6196c8a753478276337fdc3d12b7b032870ab5f955a1a4"
Apr 30 01:07:01.113506 containerd[1612]: time="2025-04-30T01:07:01.113330528Z" level=info msg="CreateContainer within sandbox \"2d5d0aa98ef8114bee8fd5ab65dcb795fbe3ea41c39b870a7fdaa8ac0ef74efd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 30 01:07:01.130779 containerd[1612]: time="2025-04-30T01:07:01.129894736Z" level=info msg="CreateContainer within sandbox \"2d5d0aa98ef8114bee8fd5ab65dcb795fbe3ea41c39b870a7fdaa8ac0ef74efd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"01cdda11cae057fd43ad0c2830c74a158fb50074e61833f9a051358a2baf5a1d\""
Apr 30 01:07:01.131449 containerd[1612]: time="2025-04-30T01:07:01.131311631Z" level=info msg="StartContainer for \"01cdda11cae057fd43ad0c2830c74a158fb50074e61833f9a051358a2baf5a1d\""
Apr 30 01:07:01.132180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904905315.mount: Deactivated successfully.
Apr 30 01:07:01.188034 containerd[1612]: time="2025-04-30T01:07:01.187492122Z" level=info msg="StartContainer for \"01cdda11cae057fd43ad0c2830c74a158fb50074e61833f9a051358a2baf5a1d\" returns successfully"
Apr 30 01:07:01.358308 kubelet[2954]: E0430 01:07:01.357908 2954 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50150->10.0.0.2:2379: read: connection timed out"
Apr 30 01:07:01.387360 containerd[1612]: time="2025-04-30T01:07:01.387270995Z" level=info msg="shim disconnected" id=df7f87ab076e49e1bca6a076e13ebbb089a78e0a329897ab30c02d9127d55612 namespace=k8s.io
Apr 30 01:07:01.387755 containerd[1612]: time="2025-04-30T01:07:01.387564718Z" level=warning msg="cleaning up after shim disconnected" id=df7f87ab076e49e1bca6a076e13ebbb089a78e0a329897ab30c02d9127d55612 namespace=k8s.io
Apr 30 01:07:01.387755 containerd[1612]: time="2025-04-30T01:07:01.387598639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:07:01.936170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df7f87ab076e49e1bca6a076e13ebbb089a78e0a329897ab30c02d9127d55612-rootfs.mount: Deactivated successfully.
Apr 30 01:07:02.113050 kubelet[2954]: I0430 01:07:02.112343 2954 scope.go:117] "RemoveContainer" containerID="df7f87ab076e49e1bca6a076e13ebbb089a78e0a329897ab30c02d9127d55612"
Apr 30 01:07:02.120088 containerd[1612]: time="2025-04-30T01:07:02.119892010Z" level=info msg="CreateContainer within sandbox \"9314b1fdfe46a6e7f8c18655d557a092c5f2a374c4ed2b28add56713e2ab14ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 01:07:02.143644 containerd[1612]: time="2025-04-30T01:07:02.141750433Z" level=info msg="CreateContainer within sandbox \"9314b1fdfe46a6e7f8c18655d557a092c5f2a374c4ed2b28add56713e2ab14ef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0e1550784f5656b914e1c044ab792fd7d118cc14a384403163595b53537b75e6\""
Apr 30 01:07:02.143644 containerd[1612]: time="2025-04-30T01:07:02.142740003Z" level=info msg="StartContainer for \"0e1550784f5656b914e1c044ab792fd7d118cc14a384403163595b53537b75e6\""
Apr 30 01:07:02.142653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849313929.mount: Deactivated successfully.
Apr 30 01:07:02.222713 containerd[1612]: time="2025-04-30T01:07:02.221478924Z" level=info msg="StartContainer for \"0e1550784f5656b914e1c044ab792fd7d118cc14a384403163595b53537b75e6\" returns successfully"
Apr 30 01:07:05.323737 kubelet[2954]: E0430 01:07:05.320866 2954 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-6-32a99953eb.183af3352f2509f2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-6-32a99953eb,UID:523ddb53e4e89ae6e8842f74387fd547,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-6-32a99953eb,},FirstTimestamp:2025-04-30 01:06:55.309261298 +0000 UTC m=+850.153887669,LastTimestamp:2025-04-30 01:06:55.309261298 +0000 UTC m=+850.153887669,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-6-32a99953eb,}"
Apr 30 01:07:06.450697 containerd[1612]: time="2025-04-30T01:07:06.450614353Z" level=info msg="shim disconnected" id=fa415790e5e072ccdf5cfa4fa4a089266199db8c20929a79a01246bc539bd7b0 namespace=k8s.io
Apr 30 01:07:06.450697 containerd[1612]: time="2025-04-30T01:07:06.450731235Z" level=warning msg="cleaning up after shim disconnected" id=fa415790e5e072ccdf5cfa4fa4a089266199db8c20929a79a01246bc539bd7b0 namespace=k8s.io
Apr 30 01:07:06.450697 containerd[1612]: time="2025-04-30T01:07:06.450742395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 01:07:06.451651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa415790e5e072ccdf5cfa4fa4a089266199db8c20929a79a01246bc539bd7b0-rootfs.mount: Deactivated successfully.
Apr 30 01:07:07.143950 kubelet[2954]: I0430 01:07:07.143894 2954 scope.go:117] "RemoveContainer" containerID="fa415790e5e072ccdf5cfa4fa4a089266199db8c20929a79a01246bc539bd7b0"
Apr 30 01:07:07.147970 containerd[1612]: time="2025-04-30T01:07:07.147825086Z" level=info msg="CreateContainer within sandbox \"7a30b6b02f95c5738b248a787312aa3ea6f052435a6e8ff85777f8e1fee806c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 01:07:07.167698 containerd[1612]: time="2025-04-30T01:07:07.167491767Z" level=info msg="CreateContainer within sandbox \"7a30b6b02f95c5738b248a787312aa3ea6f052435a6e8ff85777f8e1fee806c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"063e0ad0ae943a52f777718b64a946c880e34acdad034b4694da27729dc687bf\""
Apr 30 01:07:07.169215 containerd[1612]: time="2025-04-30T01:07:07.169152623Z" level=info msg="StartContainer for \"063e0ad0ae943a52f777718b64a946c880e34acdad034b4694da27729dc687bf\""
Apr 30 01:07:07.170804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477139247.mount: Deactivated successfully.
Apr 30 01:07:07.250233 containerd[1612]: time="2025-04-30T01:07:07.250182168Z" level=info msg="StartContainer for \"063e0ad0ae943a52f777718b64a946c880e34acdad034b4694da27729dc687bf\" returns successfully"
Apr 30 01:07:07.392364 containerd[1612]: time="2025-04-30T01:07:07.392092091Z" level=info msg="shim disconnected" id=01cdda11cae057fd43ad0c2830c74a158fb50074e61833f9a051358a2baf5a1d namespace=k8s.io
Apr 30 01:07:07.392364 containerd[1612]: time="2025-04-30T01:07:07.392166052Z" level=warning msg="cleaning up after shim disconnected" id=01cdda11cae057fd43ad0c2830c74a158fb50074e61833f9a051358a2baf5a1d namespace=k8s.io
Apr 30 01:07:07.392364 containerd[1612]: time="2025-04-30T01:07:07.392174892Z" level=info msg="cleaning up dead shim" namespace=k8s.io