Apr 30 00:45:20.897052 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:45:20.897079 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:45:20.897089 kernel: KASLR enabled
Apr 30 00:45:20.897095 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 30 00:45:20.897101 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 30 00:45:20.897107 kernel: random: crng init done
Apr 30 00:45:20.897114 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:45:20.897120 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 30 00:45:20.897131 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:45:20.897140 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897146 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897152 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897158 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897164 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897172 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897179 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897186 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897192 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:45:20.897201 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 00:45:20.897208 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 30 00:45:20.897215 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:45:20.897221 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:45:20.897227 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 30 00:45:20.897234 kernel: Zone ranges:
Apr 30 00:45:20.897240 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:45:20.897248 kernel: DMA32 empty
Apr 30 00:45:20.897255 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 30 00:45:20.897261 kernel: Movable zone start for each node
Apr 30 00:45:20.897267 kernel: Early memory node ranges
Apr 30 00:45:20.897274 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 30 00:45:20.897280 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 30 00:45:20.897286 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 30 00:45:20.897292 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 30 00:45:20.897299 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 30 00:45:20.897305 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 30 00:45:20.897311 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 30 00:45:20.897317 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 30 00:45:20.897325 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 30 00:45:20.897339 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:45:20.897345 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:45:20.897355 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:45:20.897362 kernel: psci: Trusted OS migration not required
Apr 30 00:45:20.897368 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:45:20.897377 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:45:20.897384 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:45:20.897391 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:45:20.897398 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:45:20.897404 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:45:20.897411 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:45:20.897418 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:45:20.897424 kernel: CPU features: detected: Spectre-v4
Apr 30 00:45:20.897431 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:45:20.897438 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:45:20.897446 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:45:20.897453 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:45:20.897460 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:45:20.897466 kernel: alternatives: applying boot alternatives
Apr 30 00:45:20.897474 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:45:20.897481 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:45:20.897488 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:45:20.897495 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:45:20.897501 kernel: Fallback order for Node 0: 0
Apr 30 00:45:20.897508 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 30 00:45:20.897515 kernel: Policy zone: Normal
Apr 30 00:45:20.897523 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:45:20.897530 kernel: software IO TLB: area num 2.
Apr 30 00:45:20.897536 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 30 00:45:20.897543 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
Apr 30 00:45:20.897550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:45:20.897577 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:45:20.897604 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:45:20.897611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:45:20.897618 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:45:20.897629 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:45:20.897636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:45:20.897645 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:45:20.897652 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:45:20.897659 kernel: GICv3: 256 SPIs implemented
Apr 30 00:45:20.897665 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:45:20.897672 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:45:20.897679 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:45:20.897694 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:45:20.897700 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:45:20.897707 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:45:20.897714 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:45:20.897721 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 30 00:45:20.897728 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 30 00:45:20.897739 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:45:20.897750 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:45:20.897757 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:45:20.897766 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:45:20.897774 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:45:20.897780 kernel: Console: colour dummy device 80x25
Apr 30 00:45:20.897787 kernel: ACPI: Core revision 20230628
Apr 30 00:45:20.897795 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:45:20.897802 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:45:20.897809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:45:20.897818 kernel: landlock: Up and running.
Apr 30 00:45:20.897824 kernel: SELinux: Initializing.
Apr 30 00:45:20.897831 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:45:20.897838 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:45:20.897845 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:45:20.897853 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:45:20.897863 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:45:20.897870 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:45:20.897877 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:45:20.897885 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:45:20.897892 kernel: Remapping and enabling EFI services.
Apr 30 00:45:20.897899 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:45:20.897906 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:45:20.897913 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:45:20.897920 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 30 00:45:20.897927 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:45:20.897934 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:45:20.897941 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:45:20.897948 kernel: SMP: Total of 2 processors activated.
Apr 30 00:45:20.897956 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:45:20.897963 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:45:20.897976 kernel: CPU features: detected: Common not Private translations
Apr 30 00:45:20.897984 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:45:20.897992 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:45:20.897999 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:45:20.898006 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:45:20.898013 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:45:20.898021 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:45:20.898029 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:45:20.898037 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:45:20.898044 kernel: alternatives: applying system-wide alternatives
Apr 30 00:45:20.898051 kernel: devtmpfs: initialized
Apr 30 00:45:20.898059 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:45:20.898066 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:45:20.898073 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:45:20.898082 kernel: SMBIOS 3.0.0 present.
Apr 30 00:45:20.898090 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 30 00:45:20.898097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:45:20.898104 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:45:20.898112 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:45:20.898119 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:45:20.898127 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:45:20.898134 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Apr 30 00:45:20.898142 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:45:20.898150 kernel: cpuidle: using governor menu
Apr 30 00:45:20.898161 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:45:20.898168 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:45:20.898175 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:45:20.898185 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:45:20.898193 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:45:20.898203 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:45:20.898210 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:45:20.898218 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:45:20.898227 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:45:20.898234 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:45:20.898241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:45:20.898249 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:45:20.898256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:45:20.898263 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:45:20.898271 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:45:20.898278 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:45:20.898285 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:45:20.898293 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:45:20.898301 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:45:20.898308 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:45:20.898315 kernel: ACPI: Interpreter enabled
Apr 30 00:45:20.898322 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:45:20.898330 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:45:20.898337 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:45:20.898345 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:45:20.898352 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:45:20.898516 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:45:20.898623 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:45:20.898738 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:45:20.898813 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:45:20.898878 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:45:20.898887 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:45:20.898895 kernel: PCI host bridge to bus 0000:00
Apr 30 00:45:20.898971 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:45:20.899031 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:45:20.899096 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:45:20.899159 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:45:20.899257 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:45:20.899340 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 30 00:45:20.899419 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 30 00:45:20.899496 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:45:20.899590 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.899659 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 30 00:45:20.899748 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.899816 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 30 00:45:20.899889 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.899967 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 30 00:45:20.900041 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.900109 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 30 00:45:20.900187 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.900253 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 30 00:45:20.900330 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.900407 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 30 00:45:20.900481 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.900547 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 30 00:45:20.900650 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.900733 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 30 00:45:20.900815 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 00:45:20.900887 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 30 00:45:20.900962 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 30 00:45:20.901030 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 30 00:45:20.901107 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:45:20.901186 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 30 00:45:20.901255 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:45:20.901325 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:45:20.901417 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 00:45:20.901486 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 30 00:45:20.901579 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 00:45:20.901654 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 30 00:45:20.901768 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 30 00:45:20.901848 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 00:45:20.901922 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 30 00:45:20.902002 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 00:45:20.902073 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 30 00:45:20.902149 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 30 00:45:20.902226 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 00:45:20.902296 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 30 00:45:20.902368 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:45:20.902446 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 00:45:20.902517 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 30 00:45:20.903266 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 30 00:45:20.903371 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 30 00:45:20.903442 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 30 00:45:20.903508 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:45:20.903668 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 30 00:45:20.903795 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 30 00:45:20.903866 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 30 00:45:20.903931 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 30 00:45:20.904003 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 30 00:45:20.904072 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:45:20.904138 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 30 00:45:20.904214 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 30 00:45:20.904279 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 30 00:45:20.904350 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 30 00:45:20.904421 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 30 00:45:20.904493 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:45:20.904585 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 30 00:45:20.904679 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 30 00:45:20.904764 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:45:20.904855 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 30 00:45:20.906810 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 00:45:20.906997 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:45:20.907126 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 30 00:45:20.907212 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 00:45:20.907289 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:45:20.907357 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 30 00:45:20.907443 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 00:45:20.907529 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:45:20.907633 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 30 00:45:20.907719 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 30 00:45:20.907791 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:45:20.907862 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 30 00:45:20.907928 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:45:20.908004 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 30 00:45:20.908073 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:45:20.908145 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 30 00:45:20.908220 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:45:20.908292 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 30 00:45:20.908367 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:45:20.908442 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 30 00:45:20.908525 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:45:20.910658 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 30 00:45:20.910792 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:45:20.910889 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 30 00:45:20.910969 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:45:20.911041 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 30 00:45:20.911119 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:45:20.911205 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 30 00:45:20.911290 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 30 00:45:20.911374 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 30 00:45:20.911450 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 00:45:20.911528 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 30 00:45:20.911630 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 00:45:20.911965 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 30 00:45:20.912050 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 00:45:20.912140 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 30 00:45:20.912228 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 30 00:45:20.912322 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 30 00:45:20.912393 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 30 00:45:20.912462 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 30 00:45:20.912529 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 30 00:45:20.913742 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 30 00:45:20.913858 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 30 00:45:20.913937 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 30 00:45:20.914018 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 30 00:45:20.914092 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 30 00:45:20.914177 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 30 00:45:20.914261 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 30 00:45:20.914354 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 30 00:45:20.914423 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:45:20.914503 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 30 00:45:20.915640 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 00:45:20.915769 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 30 00:45:20.915846 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 30 00:45:20.915916 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:45:20.915995 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 30 00:45:20.916072 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 00:45:20.916138 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 30 00:45:20.916205 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 30 00:45:20.916278 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:45:20.916358 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 30 00:45:20.916434 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 30 00:45:20.916519 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 00:45:20.916627 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 30 00:45:20.916710 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 30 00:45:20.916791 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:45:20.916870 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 30 00:45:20.916949 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 00:45:20.917034 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 30 00:45:20.917123 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 30 00:45:20.917189 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:45:20.917279 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 30 00:45:20.917357 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 30 00:45:20.917439 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 00:45:20.917512 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 30 00:45:20.919159 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 30 00:45:20.919257 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:45:20.919337 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 30 00:45:20.919419 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 30 00:45:20.919501 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 00:45:20.919701 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 30 00:45:20.919784 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 30 00:45:20.919851 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:45:20.919922 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 30 00:45:20.919997 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 30 00:45:20.920064 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 30 00:45:20.920143 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 00:45:20.920215 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 30 00:45:20.920290 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 30 00:45:20.920368 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:45:20.920442 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 00:45:20.920517 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 30 00:45:20.920610 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 30 00:45:20.920681 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:45:20.920812 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 00:45:20.920882 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 30 00:45:20.920946 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 30 00:45:20.921031 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:45:20.921114 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:45:20.921175 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:45:20.921234 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:45:20.921317 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 30 00:45:20.921385 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 30 00:45:20.921451 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 30 00:45:20.921531 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 30 00:45:20.923749 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 30 00:45:20.923844 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 30 00:45:20.923925 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 30 00:45:20.923997 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 30 00:45:20.924060 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 30 00:45:20.924150 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 30 00:45:20.924219 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 30 00:45:20.924282 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 30 00:45:20.924353 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 30 00:45:20.924415 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 30 00:45:20.924480 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 30 00:45:20.925589 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 30 00:45:20.925719 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 30 00:45:20.925801 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 30 00:45:20.925876 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 30 00:45:20.925939 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 30 00:45:20.926000 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 30 00:45:20.926081 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 30 00:45:20.926151 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 30 00:45:20.926223 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 30 00:45:20.926293 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 30 00:45:20.926358 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 30 00:45:20.926420 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 30 00:45:20.926429 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:45:20.926437 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:45:20.926445 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:45:20.926453 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:45:20.926512 kernel: iommu: Default domain type: Translated
Apr 30 00:45:20.926522 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:45:20.926534 kernel: efivars: Registered efivars operations
Apr 30 00:45:20.926548 kernel: vgaarb: loaded
Apr 30 00:45:20.926556 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:45:20.926579 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:45:20.926587 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:45:20.926595 kernel: pnp: PnP ACPI init
Apr 30 00:45:20.926731 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:45:20.926751 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:45:20.926763 kernel: NET: Registered PF_INET protocol family
Apr 30 00:45:20.926771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:45:20.926779 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:45:20.926787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:45:20.926795 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:45:20.926803 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:45:20.926811 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:45:20.926818 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:45:20.926826 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:45:20.926836 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:45:20.926931 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Apr 30 00:45:20.926943 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:45:20.926951 kernel: kvm [1]: HYP mode not available
Apr 30 00:45:20.926963 kernel: Initialise system trusted keyrings
Apr 30 00:45:20.926975 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:45:20.926983 kernel: Key type asymmetric registered
Apr 30 00:45:20.926991 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:45:20.926998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:45:20.927013 kernel: io scheduler mq-deadline registered
Apr 30 00:45:20.927021 kernel: io scheduler kyber registered
Apr 30 00:45:20.927029 kernel: io scheduler bfq registered
Apr 30 00:45:20.927037 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:45:20.927111 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Apr 30 00:45:20.927207 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Apr 30 00:45:20.927293 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.927372 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Apr 30 00:45:20.927455 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Apr 30 00:45:20.927534 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.931754 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Apr 30 00:45:20.931843 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Apr 30 00:45:20.931913 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.932005 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Apr 30 00:45:20.932097 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Apr 30 00:45:20.932180 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.932260 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Apr 30 00:45:20.932338 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Apr 30 00:45:20.932415 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.932490 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Apr 30 00:45:20.932570 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Apr 30 00:45:20.932651 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.932777 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Apr 30 00:45:20.932861 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Apr 30 00:45:20.932929 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.933017 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Apr 30 00:45:20.933094 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Apr 30 00:45:20.933163 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.933174 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Apr 30 00:45:20.933251 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 30 00:45:20.933321 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 30 00:45:20.933388 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 30 00:45:20.933401 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:45:20.933409 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:45:20.933418 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:45:20.933497 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 30 00:45:20.935815 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 30 00:45:20.935839 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:45:20.935848 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 00:45:20.935957 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 30 00:45:20.935976 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 30 00:45:20.935985 kernel: thunder_xcv, ver 1.0
Apr 30 00:45:20.935993 kernel: thunder_bgx, ver 1.0
Apr 30 00:45:20.936000 kernel: nicpf, ver 1.0
Apr 30 00:45:20.936008 kernel: nicvf, ver 1.0
Apr 30 00:45:20.936097 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:45:20.936200 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:45:20 UTC (1745973920)
Apr 30 00:45:20.936212 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:45:20.936223 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 00:45:20.936231 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:45:20.936239 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:45:20.936247 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:45:20.936255 kernel: Segment Routing with IPv6
Apr 30 00:45:20.936263 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:45:20.936271 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:45:20.936279 kernel: Key type dns_resolver registered
Apr 30 00:45:20.936287 kernel: registered taskstats version 1
Apr 30 00:45:20.936296 kernel: Loading compiled-in X.509 certificates
Apr 30 00:45:20.936304 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:45:20.936312 kernel: Key type .fscrypt registered
Apr 30 00:45:20.936320 kernel: Key type fscrypt-provisioning registered
Apr 30 00:45:20.936327 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:45:20.936335 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:45:20.936343 kernel: ima: No architecture policies found
Apr 30 00:45:20.936351 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:45:20.936359 kernel: clk: Disabling unused clocks
Apr 30 00:45:20.936369 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:45:20.936376 kernel: Run /init as init process
Apr 30 00:45:20.936384 kernel: with arguments:
Apr 30 00:45:20.936392 kernel: /init
Apr 30 00:45:20.936400 kernel: with environment:
Apr 30 00:45:20.936408 kernel: HOME=/
Apr 30 00:45:20.936420 kernel: TERM=linux
Apr 30 00:45:20.936428 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:45:20.936438 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:45:20.936450 systemd[1]: Detected virtualization kvm.
Apr 30 00:45:20.936459 systemd[1]: Detected architecture arm64.
Apr 30 00:45:20.936470 systemd[1]: Running in initrd.
Apr 30 00:45:20.936478 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:45:20.936489 systemd[1]: Hostname set to .
Apr 30 00:45:20.936500 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:45:20.936508 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:45:20.936522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:45:20.936531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:45:20.936539 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:45:20.936548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:45:20.936556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:45:20.937358 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:45:20.937369 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:45:20.937383 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:45:20.937392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:45:20.937400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:45:20.937409 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:45:20.937417 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:45:20.937425 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:45:20.937434 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:45:20.937442 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:45:20.937452 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:45:20.937461 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:45:20.937480 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:45:20.937489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:45:20.937498 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:45:20.937506 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:45:20.937515 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:45:20.937523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:45:20.937531 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:45:20.937542 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:45:20.937550 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:45:20.937572 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:45:20.937582 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:45:20.937625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:45:20.937633 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:45:20.937670 systemd-journald[235]: Collecting audit messages is disabled.
Apr 30 00:45:20.937734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:45:20.937744 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:45:20.937753 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:45:20.937765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:45:20.937774 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:45:20.937782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:45:20.937791 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:45:20.937799 kernel: Bridge firewalling registered
Apr 30 00:45:20.937808 systemd-journald[235]: Journal started
Apr 30 00:45:20.937830 systemd-journald[235]: Runtime Journal (/run/log/journal/d2ede32642eb42539498681eaabf725f) is 8.0M, max 76.6M, 68.6M free.
Apr 30 00:45:20.941368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:45:20.909622 systemd-modules-load[236]: Inserted module 'overlay'
Apr 30 00:45:20.936672 systemd-modules-load[236]: Inserted module 'br_netfilter'
Apr 30 00:45:20.945039 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:45:20.949047 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:45:20.952906 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:45:20.960761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:45:20.964306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:45:20.973813 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:45:20.977835 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:45:20.978547 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:45:20.987240 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:45:20.994823 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:45:20.997531 dracut-cmdline[269]: dracut-dracut-053
Apr 30 00:45:21.001119 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:45:21.024422 systemd-resolved[276]: Positive Trust Anchors:
Apr 30 00:45:21.024438 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:45:21.024469 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:45:21.035146 systemd-resolved[276]: Defaulting to hostname 'linux'.
Apr 30 00:45:21.036717 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:45:21.037360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:45:21.079595 kernel: SCSI subsystem initialized
Apr 30 00:45:21.084609 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:45:21.092586 kernel: iscsi: registered transport (tcp)
Apr 30 00:45:21.105665 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:45:21.105802 kernel: QLogic iSCSI HBA Driver
Apr 30 00:45:21.152119 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:45:21.157766 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:45:21.178860 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:45:21.178920 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:45:21.178931 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:45:21.231601 kernel: raid6: neonx8 gen() 15594 MB/s
Apr 30 00:45:21.248622 kernel: raid6: neonx4 gen() 15512 MB/s
Apr 30 00:45:21.265618 kernel: raid6: neonx2 gen() 13057 MB/s
Apr 30 00:45:21.282690 kernel: raid6: neonx1 gen() 10344 MB/s
Apr 30 00:45:21.299638 kernel: raid6: int64x8 gen() 6902 MB/s
Apr 30 00:45:21.316661 kernel: raid6: int64x4 gen() 7270 MB/s
Apr 30 00:45:21.333643 kernel: raid6: int64x2 gen() 6057 MB/s
Apr 30 00:45:21.350630 kernel: raid6: int64x1 gen() 4980 MB/s
Apr 30 00:45:21.350743 kernel: raid6: using algorithm neonx8 gen() 15594 MB/s
Apr 30 00:45:21.367645 kernel: raid6: .... xor() 11848 MB/s, rmw enabled
Apr 30 00:45:21.367743 kernel: raid6: using neon recovery algorithm
Apr 30 00:45:21.372610 kernel: xor: measuring software checksum speed
Apr 30 00:45:21.372706 kernel: 8regs : 19807 MB/sec
Apr 30 00:45:21.372740 kernel: 32regs : 17707 MB/sec
Apr 30 00:45:21.373599 kernel: arm64_neon : 27007 MB/sec
Apr 30 00:45:21.373641 kernel: xor: using function: arm64_neon (27007 MB/sec)
Apr 30 00:45:21.424628 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:45:21.439948 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:45:21.447854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:45:21.463995 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Apr 30 00:45:21.467405 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:45:21.476007 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:45:21.495993 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Apr 30 00:45:21.535881 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:45:21.542945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:45:21.604752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:45:21.615068 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:45:21.639641 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:45:21.640730 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:45:21.643844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:45:21.644401 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:45:21.649845 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:45:21.672520 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:45:21.707222 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:45:21.713145 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 00:45:21.713182 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 00:45:21.734469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:45:21.734658 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:45:21.738311 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:45:21.739198 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:45:21.739357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:45:21.741740 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:45:21.750903 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 30 00:45:21.761268 kernel: ACPI: bus type USB registered
Apr 30 00:45:21.761288 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:45:21.761298 kernel: usbcore: registered new interface driver hub
Apr 30 00:45:21.761308 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 30 00:45:21.761444 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:45:21.761479 kernel: usbcore: registered new device driver usb
Apr 30 00:45:21.761492 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 30 00:45:21.759281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:45:21.773081 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 30 00:45:21.781751 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 00:45:21.781873 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 30 00:45:21.781963 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 30 00:45:21.782044 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 00:45:21.782125 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:45:21.782135 kernel: GPT:17805311 != 80003071
Apr 30 00:45:21.782144 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:45:21.782153 kernel: GPT:17805311 != 80003071
Apr 30 00:45:21.782161 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:45:21.782170 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:45:21.782180 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 30 00:45:21.789551 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:45:21.804859 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 00:45:21.804978 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 00:45:21.805062 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:45:21.805161 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 00:45:21.805262 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 00:45:21.805354 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:45:21.805460 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 00:45:21.805541 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 00:45:21.806011 kernel: hub 2-0:1.0: USB hub found
Apr 30 00:45:21.806163 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 00:45:21.791842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:45:21.801298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:45:21.835151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:45:21.852595 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (507) Apr 30 00:45:21.854599 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (502) Apr 30 00:45:21.857817 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 30 00:45:21.866829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 30 00:45:21.878136 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 30 00:45:21.880183 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 30 00:45:21.885470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 00:45:21.900862 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:45:21.909184 disk-uuid[573]: Primary Header is updated. Apr 30 00:45:21.909184 disk-uuid[573]: Secondary Entries is updated. Apr 30 00:45:21.909184 disk-uuid[573]: Secondary Header is updated. Apr 30 00:45:21.916549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:45:21.921640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:45:21.927898 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:45:22.042736 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 00:45:22.286745 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 30 00:45:22.422291 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 30 00:45:22.422342 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 00:45:22.424583 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 30 00:45:22.479622 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 30 00:45:22.480123 kernel: usbcore: registered new interface driver usbhid Apr 30 00:45:22.480202 kernel: usbhid: USB HID core driver Apr 30 00:45:22.928623 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:45:22.929431 disk-uuid[574]: The operation has completed successfully. Apr 30 00:45:22.978627 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:45:22.978750 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:45:22.989789 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:45:23.003528 sh[591]: Success Apr 30 00:45:23.018710 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:45:23.080907 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:45:23.082543 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:45:23.088235 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
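[Editor's note: verity-setup validates the read-only /usr partition against the Merkle root pinned by the verity.usrhash= argument on the kernel command line, using the sha256-ce implementation reported above. An illustrative Python sketch of the bottom layer of that hash tree; real dm-verity additionally salts each hash and builds multiple tree levels up to the single root hash:]

```python
import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def leaf_hashes(path, salt=b""):
    """Yield the sha256 digest of each 4 KiB block, as the bottom level
    of a dm-verity hash tree would (real verity hashes salt + data and
    the data device is normally an exact multiple of the block size)."""
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            yield hashlib.sha256(salt + chunk.ljust(BLOCK, b"\0")).hexdigest()

# Upper levels hash concatenated child digests until one root remains;
# that root is what verity.usrhash= pins on the kernel command line.
# e.g. (as root): leaves = list(leaf_hashes("/dev/mapper/usr"))
```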
Apr 30 00:45:23.112815 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 00:45:23.112884 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:45:23.112907 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:45:23.113680 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:45:23.113709 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:45:23.119610 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 00:45:23.121772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:45:23.123595 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:45:23.129846 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:45:23.134783 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:45:23.144259 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:45:23.144312 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:45:23.144325 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:45:23.148599 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 00:45:23.148653 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:45:23.159385 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:45:23.160625 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:45:23.165173 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:45:23.172842 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:45:23.256100 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:45:23.263819 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:45:23.274489 ignition[681]: Ignition 2.19.0 Apr 30 00:45:23.274500 ignition[681]: Stage: fetch-offline Apr 30 00:45:23.274534 ignition[681]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:45:23.274542 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:45:23.275549 ignition[681]: parsed url from cmdline: "" Apr 30 00:45:23.277656 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:45:23.275553 ignition[681]: no config URL provided Apr 30 00:45:23.275570 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:45:23.275581 ignition[681]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:45:23.275586 ignition[681]: failed to fetch config: resource requires networking Apr 30 00:45:23.275792 ignition[681]: Ignition finished successfully Apr 30 00:45:23.285738 systemd-networkd[778]: lo: Link UP Apr 30 00:45:23.285750 systemd-networkd[778]: lo: Gained carrier Apr 30 00:45:23.287691 systemd-networkd[778]: Enumeration completed Apr 30 00:45:23.287791 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:45:23.288512 systemd[1]: Reached target network.target - Network. Apr 30 00:45:23.289299 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 00:45:23.289302 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:45:23.290835 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:23.290838 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:45:23.292205 systemd-networkd[778]: eth0: Link UP Apr 30 00:45:23.292208 systemd-networkd[778]: eth0: Gained carrier Apr 30 00:45:23.292216 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:23.297745 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 00:45:23.297806 systemd-networkd[778]: eth1: Link UP Apr 30 00:45:23.297810 systemd-networkd[778]: eth1: Gained carrier Apr 30 00:45:23.297818 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:23.311398 ignition[782]: Ignition 2.19.0 Apr 30 00:45:23.311407 ignition[782]: Stage: fetch Apr 30 00:45:23.311612 ignition[782]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:45:23.311622 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:45:23.311762 ignition[782]: parsed url from cmdline: "" Apr 30 00:45:23.311766 ignition[782]: no config URL provided Apr 30 00:45:23.311771 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:45:23.311779 ignition[782]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:45:23.311799 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 30 00:45:23.312470 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 00:45:23.329697 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:45:23.359694 systemd-networkd[778]: eth0: DHCPv4 address 49.13.50.0/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 00:45:23.512710 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 30 00:45:23.519432 ignition[782]: GET result: OK Apr 30 00:45:23.519627 ignition[782]: parsing config with SHA512: 33745544f4ae9415818a575510a3b3e9186ee4ee147ed71da5b9c3f8f4e9c9a4270a63cc125a74b1a12df808137ea546ae2c5fa1a16eef551bbadae264620a8f Apr 30 00:45:23.525544 unknown[782]: fetched base config from "system" Apr 30 00:45:23.525553 unknown[782]: fetched base config from "system" Apr 30 00:45:23.526002 ignition[782]: fetch: fetch complete Apr 30 00:45:23.525570 unknown[782]: fetched user config from "hetzner" Apr 30 00:45:23.526008 ignition[782]: fetch: fetch passed Apr 30 00:45:23.526062 ignition[782]: Ignition finished successfully Apr 30 00:45:23.528358 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 00:45:23.533881 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
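[Editor's note: the fetch stage above fails on attempt #1 ("network is unreachable") because DHCP has not finished, then succeeds on attempt #2 once eth0/eth1 have addresses. A simplified stdlib-only sketch of that retry loop; real Ignition uses exponential backoff and logs the SHA512 of the config it fetched, as seen above:]

```python
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(retries=5, delay=2.0):
    """Retry the metadata fetch until networking is up, mirroring the
    attempt #1 / attempt #2 pattern in the log (simplified backoff)."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET {URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable")
```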
Apr 30 00:45:23.551241 ignition[789]: Ignition 2.19.0 Apr 30 00:45:23.551256 ignition[789]: Stage: kargs Apr 30 00:45:23.551452 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:45:23.551462 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:45:23.552430 ignition[789]: kargs: kargs passed Apr 30 00:45:23.552483 ignition[789]: Ignition finished successfully Apr 30 00:45:23.556642 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:45:23.564862 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:45:23.577519 ignition[796]: Ignition 2.19.0 Apr 30 00:45:23.577533 ignition[796]: Stage: disks Apr 30 00:45:23.578527 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:45:23.578553 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:45:23.579959 ignition[796]: disks: disks passed Apr 30 00:45:23.580028 ignition[796]: Ignition finished successfully Apr 30 00:45:23.584640 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:45:23.585908 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:45:23.587022 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:45:23.588285 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:45:23.588892 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:45:23.589838 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:45:23.595844 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:45:23.615823 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 00:45:23.622108 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:45:23.627789 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:45:23.672612 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 00:45:23.673824 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:45:23.675035 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:45:23.684784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:45:23.688031 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:45:23.697153 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 00:45:23.698507 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:45:23.699709 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:45:23.701518 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:45:23.704762 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 00:45:23.709576 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812) Apr 30 00:45:23.711733 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:45:23.711776 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:45:23.711787 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:45:23.719179 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 00:45:23.719237 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:45:23.724177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 00:45:23.766810 coreos-metadata[814]: Apr 30 00:45:23.766 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 30 00:45:23.768632 coreos-metadata[814]: Apr 30 00:45:23.768 INFO Fetch successful Apr 30 00:45:23.771271 coreos-metadata[814]: Apr 30 00:45:23.770 INFO wrote hostname ci-4081-3-3-7-874bc1dee9 to /sysroot/etc/hostname Apr 30 00:45:23.773039 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:45:23.773012 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:45:23.780491 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:45:23.785829 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:45:23.790343 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:45:23.894365 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:45:23.901753 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:45:23.907677 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:45:23.911580 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:45:23.936595 ignition[929]: INFO : Ignition 2.19.0 Apr 30 00:45:23.937398 ignition[929]: INFO : Stage: mount Apr 30 00:45:23.938731 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:45:23.938731 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:45:23.938731 ignition[929]: INFO : mount: mount passed Apr 30 00:45:23.938731 ignition[929]: INFO : Ignition finished successfully Apr 30 00:45:23.940617 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:45:23.945883 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:45:23.947599 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 00:45:24.113979 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:45:24.123890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:45:24.132607 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Apr 30 00:45:24.133957 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:45:24.134003 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:45:24.134026 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:45:24.137589 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 00:45:24.137644 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:45:24.141140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
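[Editor's note: flatcar-metadata-hostname does essentially what the coreos-metadata lines above record: fetch the hostname from the Hetzner metadata service and write it into the still-mounted /sysroot before the root switch. A minimal sketch, error handling omitted:]

```python
import urllib.request

META = "http://169.254.169.254/hetzner/v1/metadata/hostname"
SYSROOT = "/sysroot"  # the not-yet-pivoted root, as in the log

hostname = urllib.request.urlopen(META, timeout=10).read().decode().strip()
with open(f"{SYSROOT}/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname} to {SYSROOT}/etc/hostname")
```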
Apr 30 00:45:24.161026 ignition[958]: INFO : Ignition 2.19.0 Apr 30 00:45:24.161026 ignition[958]: INFO : Stage: files Apr 30 00:45:24.162122 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:45:24.162122 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:45:24.163761 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:45:24.163761 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:45:24.163761 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:45:24.166697 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:45:24.167767 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:45:24.167767 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:45:24.167036 unknown[958]: wrote ssh authorized keys file for user: core Apr 30 00:45:24.170116 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 00:45:24.170116 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Apr 30 00:45:24.274232 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 00:45:24.609979 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 00:45:24.609979 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:45:24.614161 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Apr 30 00:45:24.902941 systemd-networkd[778]: eth0: Gained IPv6LL Apr 30 00:45:25.208524 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 00:45:25.350817 systemd-networkd[778]: eth1: Gained IPv6LL Apr 30 00:45:25.544148 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:45:25.544148 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 00:45:25.546812 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:45:25.547881 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:45:25.557456 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:45:25.557456 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:45:25.557456 ignition[958]: INFO : files: files passed Apr 30 00:45:25.557456 ignition[958]: INFO : Ignition finished successfully Apr 30 00:45:25.550630 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:45:25.556927 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:45:25.561330 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 00:45:25.567904 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:45:25.568009 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 30 00:45:25.578594 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:45:25.578594 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:45:25.580860 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:45:25.583372 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:45:25.584891 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:45:25.591793 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:45:25.633755 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:45:25.633989 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:45:25.636108 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:45:25.636869 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:45:25.637608 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:45:25.642912 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:45:25.659381 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:45:25.669889 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:45:25.685732 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:45:25.687038 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:45:25.687784 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:45:25.688820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:45:25.688975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:45:25.690480 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:45:25.691120 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:45:25.692183 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:45:25.693189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:45:25.694160 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:45:25.695174 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:45:25.696249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:45:25.697401 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:45:25.698338 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:45:25.699370 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:45:25.700244 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:45:25.700373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:45:25.701529 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:45:25.702221 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:45:25.703203 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:45:25.703608 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 30 00:45:25.704274 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:45:25.704396 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:45:25.705845 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:45:25.705973 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:45:25.707058 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:45:25.707155 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:45:25.708190 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 00:45:25.708294 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:45:25.715845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:45:25.719812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:45:25.720268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:45:25.720389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:45:25.721266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:45:25.721357 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:45:25.731040 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:45:25.731141 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:45:25.739327 ignition[1010]: INFO : Ignition 2.19.0 Apr 30 00:45:25.739327 ignition[1010]: INFO : Stage: umount Apr 30 00:45:25.741340 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:45:25.741340 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 00:45:25.741340 ignition[1010]: INFO : umount: umount passed Apr 30 00:45:25.741340 ignition[1010]: INFO : Ignition finished successfully Apr 30 00:45:25.743134 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:45:25.744664 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:45:25.746190 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:45:25.746836 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:45:25.746892 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:45:25.748955 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:45:25.749010 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:45:25.750086 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 00:45:25.750121 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:45:25.751203 systemd[1]: Stopped target network.target - Network. Apr 30 00:45:25.752044 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:45:25.752096 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:45:25.754765 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:45:25.755471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:45:25.761693 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:45:25.763322 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:45:25.764041 systemd[1]: Stopped target sockets.target - Socket Units. 
Apr 30 00:45:25.765316 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:45:25.765362 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:45:25.766286 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:45:25.766322 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:45:25.767193 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:45:25.767240 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:45:25.768207 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:45:25.768251 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:45:25.769235 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:45:25.770781 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:45:25.771486 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:45:25.771593 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:45:25.773017 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:45:25.773091 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:45:25.778622 systemd-networkd[778]: eth0: DHCPv6 lease lost Apr 30 00:45:25.780246 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:45:25.780440 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:45:25.783402 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:45:25.783474 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:45:25.784228 systemd-networkd[778]: eth1: DHCPv6 lease lost Apr 30 00:45:25.785992 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:45:25.786157 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:45:25.787360 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:45:25.787419 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:45:25.792740 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:45:25.793515 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:45:25.795718 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:45:25.798322 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:45:25.798390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:45:25.798995 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:45:25.799037 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:45:25.800190 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:45:25.814918 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:45:25.815062 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:45:25.824926 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:45:25.825257 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:45:25.828470 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:45:25.828553 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:45:25.830135 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Apr 30 00:45:25.830168 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:45:25.831781 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:45:25.831832 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:45:25.833703 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:45:25.833752 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:45:25.835674 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:45:25.835728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:45:25.851363 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:45:25.852837 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:45:25.852949 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:45:25.854408 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 00:45:25.854484 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:45:25.857884 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:45:25.857944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:45:25.859729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:45:25.859784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:45:25.863831 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:45:25.865155 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:45:25.866916 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:45:25.873772 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:45:25.883534 systemd[1]: Switching root. Apr 30 00:45:25.919791 systemd-journald[235]: Journal stopped Apr 30 00:45:26.812615 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Apr 30 00:45:26.812733 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:45:26.812752 kernel: SELinux: policy capability open_perms=1 Apr 30 00:45:26.812762 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:45:26.812772 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:45:26.812781 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:45:26.812791 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:45:26.812800 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:45:26.812814 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:45:26.812823 kernel: audit: type=1403 audit(1745973926.076:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:45:26.812835 systemd[1]: Successfully loaded SELinux policy in 35.323ms. Apr 30 00:45:26.812859 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.038ms. Apr 30 00:45:26.812870 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:45:26.812881 systemd[1]: Detected virtualization kvm. 
Apr 30 00:45:26.812892 systemd[1]: Detected architecture arm64. Apr 30 00:45:26.812902 systemd[1]: Detected first boot. Apr 30 00:45:26.812914 systemd[1]: Hostname set to <ci-4081-3-3-7-874bc1dee9>. Apr 30 00:45:26.812924 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:45:26.812935 zram_generator::config[1053]: No configuration found. Apr 30 00:45:26.812946 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:45:26.812956 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 00:45:26.812966 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 00:45:26.812977 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 00:45:26.812988 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:45:26.812998 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:45:26.813010 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:45:26.813020 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:45:26.813030 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:45:26.813041 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:45:26.813051 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:45:26.813066 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:45:26.813076 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:45:26.813091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:45:26.813102 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:45:26.813114 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:45:26.813125 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:45:26.813135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:45:26.813146 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 00:45:26.813156 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:45:26.813167 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 00:45:26.813177 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 00:45:26.813189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 00:45:26.813201 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:45:26.813211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:45:26.813225 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:45:26.813235 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:45:26.813250 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:45:26.813262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:45:26.813272 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:45:26.813284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
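[Editor's note: "Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided SMBIOS product UUID on this first boot instead of generating a random one. A hedged sketch of that derivation; the sysfs path is the usual location for the DMI UUID, but systemd's actual logic has more sources and fallbacks:]

```python
# Illustrative only: derive a machine-id-shaped value from the VM's
# SMBIOS product UUID, as systemd can do on first boot in a KVM guest.
uuid = open("/sys/class/dmi/id/product_uuid").read().strip()
machine_id = uuid.replace("-", "").lower()   # 32 lowercase hex digits
print(machine_id)  # the form systemd writes to /etc/machine-id
```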
Apr 30 00:45:26.813295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:45:26.813305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:45:26.813316 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:45:26.813327 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:45:26.813338 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:45:26.813348 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:45:26.813359 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:45:26.813369 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:45:26.813381 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:45:26.813392 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:45:26.813406 systemd[1]: Reached target machines.target - Containers. Apr 30 00:45:26.813418 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:45:26.813432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:45:26.813444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:45:26.813455 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:45:26.813465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:45:26.813475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:45:26.813486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:45:26.813496 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:45:26.813507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:45:26.813518 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:45:26.813531 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 00:45:26.813541 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 00:45:26.813552 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 00:45:26.813575 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 00:45:26.813586 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:45:26.813605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:45:26.813616 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:45:26.813627 kernel: ACPI: bus type drm_connector registered Apr 30 00:45:26.813645 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:45:26.813660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:45:26.813671 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 00:45:26.813686 systemd[1]: Stopped verity-setup.service. Apr 30 00:45:26.813697 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Apr 30 00:45:26.813708 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:45:26.813720 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:45:26.813731 kernel: fuse: init (API version 7.39) Apr 30 00:45:26.813740 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:45:26.813751 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:45:26.813761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:45:26.813772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:45:26.813782 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:45:26.813793 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:45:26.813802 kernel: loop: module loaded Apr 30 00:45:26.813814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:45:26.813825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:45:26.813835 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:45:26.813846 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:45:26.813858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:45:26.813868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:45:26.813880 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:45:26.813891 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:45:26.813901 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:45:26.813912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:45:26.813922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:45:26.813933 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:45:26.813975 systemd-journald[1116]: Collecting audit messages is disabled. Apr 30 00:45:26.813999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:45:26.814010 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:45:26.814021 systemd-journald[1116]: Journal started Apr 30 00:45:26.814043 systemd-journald[1116]: Runtime Journal (/run/log/journal/d2ede32642eb42539498681eaabf725f) is 8.0M, max 76.6M, 68.6M free. Apr 30 00:45:26.818789 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:45:26.554440 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:45:26.578073 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 00:45:26.578862 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 00:45:26.826630 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:45:26.828909 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:45:26.828966 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:45:26.833798 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:45:26.840915 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:45:26.840976 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 30 00:45:26.843589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:45:26.854022 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:45:26.856626 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:45:26.861584 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:45:26.861687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:45:26.871851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:45:26.877921 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:45:26.881970 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:45:26.886623 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:45:26.888600 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:45:26.890199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:45:26.892004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:45:26.894074 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:45:26.899244 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:45:26.929803 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:45:26.939776 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:45:26.944592 kernel: loop0: detected capacity change from 0 to 114328 Apr 30 00:45:26.947838 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:45:26.950212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:45:26.964844 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:45:26.972266 systemd-journald[1116]: Time spent on flushing to /var/log/journal/d2ede32642eb42539498681eaabf725f is 21.280ms for 1135 entries. Apr 30 00:45:26.972266 systemd-journald[1116]: System Journal (/var/log/journal/d2ede32642eb42539498681eaabf725f) is 8.0M, max 584.8M, 576.8M free. Apr 30 00:45:27.009238 systemd-journald[1116]: Received client request to flush runtime journal. Apr 30 00:45:27.009277 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:45:26.978362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:45:26.989153 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:45:26.993409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:45:26.999721 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 00:45:27.016523 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:45:27.021186 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. Apr 30 00:45:27.021204 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. 
Apr 30 00:45:27.026831 kernel: loop1: detected capacity change from 0 to 8 Apr 30 00:45:27.030661 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:45:27.038736 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:45:27.048595 kernel: loop2: detected capacity change from 0 to 201592 Apr 30 00:45:27.094647 kernel: loop3: detected capacity change from 0 to 114432 Apr 30 00:45:27.101082 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:45:27.114614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:45:27.138679 kernel: loop4: detected capacity change from 0 to 114328 Apr 30 00:45:27.148358 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Apr 30 00:45:27.148376 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Apr 30 00:45:27.157312 kernel: loop5: detected capacity change from 0 to 8 Apr 30 00:45:27.157375 kernel: loop6: detected capacity change from 0 to 201592 Apr 30 00:45:27.162047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:45:27.180588 kernel: loop7: detected capacity change from 0 to 114432 Apr 30 00:45:27.199229 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 00:45:27.199750 (sd-merge)[1194]: Merged extensions into '/usr'. Apr 30 00:45:27.205799 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:45:27.205815 systemd[1]: Reloading... Apr 30 00:45:27.308104 zram_generator::config[1217]: No configuration found. Apr 30 00:45:27.381589 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:45:27.461744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:45:27.508148 systemd[1]: Reloading finished in 301 ms. Apr 30 00:45:27.537601 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:45:27.538665 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:45:27.554148 systemd[1]: Starting ensure-sysext.service... Apr 30 00:45:27.556243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:45:27.571417 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:45:27.573869 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:45:27.573901 systemd[1]: Reloading... Apr 30 00:45:27.588936 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:45:27.589256 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:45:27.591171 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:45:27.591551 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Apr 30 00:45:27.591730 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Apr 30 00:45:27.595401 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
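[Editor's note: the (sd-merge) lines above show systemd-sysext discovering the four extension images and merging them into /usr. A sketch of the discovery step; the search directories follow the systemd-sysext convention, and the mount in the trailing comment is conceptual, not executed by this snippet:]

```python
import glob
import os

# Directories where systemd-sysext looks for extension images, among others.
SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

exts = sorted(
    os.path.splitext(os.path.basename(p))[0]
    for d in SEARCH for p in glob.glob(f"{d}/*.raw")
)
print("Using extensions", ", ".join(f"'{e}'" for e in exts))

# Conceptually, the merge overlays each image's /usr over the host's
# read-only /usr, roughly:
#   mount -t overlay overlay -o lowerdir=<ext2>/usr:<ext1>/usr:/usr /usr
```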
Apr 30 00:45:27.595410 systemd-tmpfiles[1259]: Skipping /boot Apr 30 00:45:27.606135 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:45:27.606225 systemd-tmpfiles[1259]: Skipping /boot Apr 30 00:45:27.647586 zram_generator::config[1295]: No configuration found. Apr 30 00:45:27.743316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:45:27.789348 systemd[1]: Reloading finished in 215 ms. Apr 30 00:45:27.804270 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:45:27.826849 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:45:27.832863 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:45:27.837144 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:45:27.845970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:45:27.850919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:45:27.856750 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:45:27.862982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:45:27.865872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:45:27.871489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:45:27.873988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:45:27.874962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:45:27.877518 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:45:27.883032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:45:27.883176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:45:27.888312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:45:27.901141 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:45:27.902297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:45:27.903976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:45:27.906017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:45:27.908512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:45:27.908847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:45:27.911729 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:45:27.911885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:45:27.916770 augenrules[1351]: No rules Apr 30 00:45:27.919156 systemd[1]: Finished ensure-sysext.service. Apr 30 00:45:27.920645 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 30 00:45:27.923420 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:45:27.925101 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:45:27.925683 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:45:27.933251 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:45:27.938790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:45:27.938887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:45:27.945845 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:45:27.954858 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:45:27.960536 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Apr 30 00:45:27.972687 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:45:27.981694 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:45:27.992184 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:45:27.993200 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:45:27.994284 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:45:28.006070 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:45:28.104274 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:45:28.105028 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:45:28.105771 systemd-networkd[1379]: lo: Link UP Apr 30 00:45:28.105783 systemd-networkd[1379]: lo: Gained carrier Apr 30 00:45:28.106344 systemd-networkd[1379]: Enumeration completed Apr 30 00:45:28.106750 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:45:28.107344 systemd-timesyncd[1363]: No network connectivity, watching for changes. Apr 30 00:45:28.121985 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:45:28.124413 systemd-resolved[1332]: Positive Trust Anchors: Apr 30 00:45:28.124431 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:45:28.124463 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:45:28.131005 systemd-resolved[1332]: Using system hostname 'ci-4081-3-3-7-874bc1dee9'. Apr 30 00:45:28.133771 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:45:28.134486 systemd[1]: Reached target network.target - Network. 
Apr 30 00:45:28.135046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:45:28.138922 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 00:45:28.185642 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 00:45:28.200903 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:28.201200 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:45:28.203036 systemd-networkd[1379]: eth0: Link UP Apr 30 00:45:28.203145 systemd-networkd[1379]: eth0: Gained carrier Apr 30 00:45:28.203390 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:28.229503 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:28.229513 systemd-networkd[1379]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:45:28.231211 systemd-networkd[1379]: eth1: Link UP Apr 30 00:45:28.231309 systemd-networkd[1379]: eth1: Gained carrier Apr 30 00:45:28.231367 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:45:28.254775 systemd-networkd[1379]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:45:28.255552 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection. Apr 30 00:45:28.274613 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1389) Apr 30 00:45:28.277265 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 00:45:28.277390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:45:28.279901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:45:28.281694 systemd-networkd[1379]: eth0: DHCPv4 address 49.13.50.0/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 00:45:28.282061 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection. Apr 30 00:45:28.282187 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection. Apr 30 00:45:28.283800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:45:28.289310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:45:28.291822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:45:28.291858 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:45:28.304661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:45:28.304882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:45:28.310196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
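Both NICs above were matched by the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd flags the "potentially unpredictable interface name". One conventional way to quiet that, sketched here with an invented MAC address, is a higher-priority .network file that matches on hardware identity rather than interface name:

    # /etc/systemd/network/10-eth0.network (illustrative; the MAC is a placeholder)
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=ipv4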
Apr 30 00:45:28.310611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:45:28.313130 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:45:28.313288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:45:28.314760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:45:28.314814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:45:28.353705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:45:28.363782 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 30 00:45:28.363849 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 00:45:28.363862 kernel: [drm] features: -context_init Apr 30 00:45:28.364721 kernel: [drm] number of scanouts: 1 Apr 30 00:45:28.365584 kernel: [drm] number of cap sets: 0 Apr 30 00:45:28.369580 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 00:45:28.382582 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 00:45:28.388071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 00:45:28.388614 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 00:45:28.388959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:45:28.389153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:45:28.400232 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:45:28.404792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:45:28.413613 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:45:28.469794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:45:28.520143 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:45:28.526874 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:45:28.543594 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:45:28.570966 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:45:28.571959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:45:28.572538 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:45:28.573191 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:45:28.575664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:45:28.576429 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:45:28.577332 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:45:28.578000 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:45:28.578569 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Apr 30 00:45:28.578601 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:45:28.579017 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:45:28.581229 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:45:28.583331 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:45:28.590248 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:45:28.593059 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:45:28.594839 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:45:28.595965 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:45:28.596888 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:45:28.597873 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:45:28.598007 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:45:28.602867 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:45:28.606282 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 00:45:28.611113 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:45:28.611349 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:45:28.613908 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:45:28.620800 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:45:28.621310 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:45:28.623815 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:45:28.630801 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:45:28.634514 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 00:45:28.640194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:45:28.654721 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:45:28.659849 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:45:28.661335 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:45:28.662898 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:45:28.665343 jq[1447]: false Apr 30 00:45:28.665782 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:45:28.670769 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:45:28.675130 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:45:28.677881 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:45:28.678047 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:45:28.696326 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:45:28.696508 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
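The several units "skipped because no trigger condition checks were met" or "because of an unmet condition check" are not failures: systemd evaluates Condition*= directives at start time and records a clean skip when one is false. The guard named in the flatcar-setup-environment entry above would look roughly like this in the unit file (wording of the Description inferred from the log):

    [Unit]
    Description=Modifies /etc/environment for CoreOS
    ConditionPathExists=/oem/bin/flatcar-setup-environment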
Apr 30 00:45:28.715633 coreos-metadata[1445]: Apr 30 00:45:28.715 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found loop4 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found loop5 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found loop6 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found loop7 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda1 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda2 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda3 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found usr Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda4 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda6 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda7 Apr 30 00:45:28.716488 extend-filesystems[1448]: Found sda9 Apr 30 00:45:28.716488 extend-filesystems[1448]: Checking size of /dev/sda9 Apr 30 00:45:28.718328 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:45:28.753928 tar[1468]: linux-arm64/LICENSE Apr 30 00:45:28.753928 tar[1468]: linux-arm64/helm Apr 30 00:45:28.760776 coreos-metadata[1445]: Apr 30 00:45:28.718 INFO Fetch successful Apr 30 00:45:28.760776 coreos-metadata[1445]: Apr 30 00:45:28.718 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 00:45:28.760776 coreos-metadata[1445]: Apr 30 00:45:28.720 INFO Fetch successful Apr 30 00:45:28.760876 jq[1461]: true Apr 30 00:45:28.719124 dbus-daemon[1446]: [system] SELinux support is enabled Apr 30 00:45:28.719646 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:45:28.722848 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:45:28.742834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:45:28.742888 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:45:28.748287 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:45:28.748309 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:45:28.760052 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:45:28.777784 update_engine[1459]: I20250430 00:45:28.776516 1459 main.cc:92] Flatcar Update Engine starting Apr 30 00:45:28.779705 extend-filesystems[1448]: Resized partition /dev/sda9 Apr 30 00:45:28.785655 jq[1481]: true Apr 30 00:45:28.791658 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:45:28.796219 update_engine[1459]: I20250430 00:45:28.789600 1459 update_check_scheduler.cc:74] Next update check in 9m28s Apr 30 00:45:28.796271 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:45:28.798058 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:45:28.804481 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 00:45:28.855887 systemd-logind[1458]: New seat seat0. 
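The extend-filesystems run that concludes just below grows /dev/sda9 to fill the disk and then resizes the mounted ext4 filesystem on-line, matching the resize2fs 1.47.1 output. Done by hand, the equivalent two steps would be something like the following (growpart from cloud-utils is an assumption, not taken from this log):

    growpart /dev/sda 9     # extend partition 9 to the end of the disk
    resize2fs /dev/sda9     # on-line grow of the mounted ext4 filesystem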
Apr 30 00:45:28.865654 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:45:28.865676 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 30 00:45:28.865902 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:45:28.903657 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1375) Apr 30 00:45:28.899651 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 00:45:28.900718 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:45:28.963436 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:45:28.965161 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:45:28.971909 systemd[1]: Starting sshkeys.service... Apr 30 00:45:28.980600 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 00:45:29.010429 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 00:45:29.015767 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 00:45:29.015767 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 00:45:29.015767 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 00:45:29.024698 extend-filesystems[1448]: Resized filesystem in /dev/sda9 Apr 30 00:45:29.024698 extend-filesystems[1448]: Found sr0 Apr 30 00:45:29.024214 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 00:45:29.026047 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:45:29.028674 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:45:29.086831 containerd[1475]: time="2025-04-30T00:45:29.082522000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 00:45:29.087086 coreos-metadata[1524]: Apr 30 00:45:29.086 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 00:45:29.089183 coreos-metadata[1524]: Apr 30 00:45:29.089 INFO Fetch successful Apr 30 00:45:29.093916 unknown[1524]: wrote ssh authorized keys file for user: core Apr 30 00:45:29.127895 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:45:29.129986 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 00:45:29.133286 systemd[1]: Finished sshkeys.service. Apr 30 00:45:29.154546 containerd[1475]: time="2025-04-30T00:45:29.154472440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.159854760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.159898200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.159915520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160078720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160097480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160156560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160169920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160328960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160344920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160357720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161583 containerd[1475]: time="2025-04-30T00:45:29.160367680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161863 containerd[1475]: time="2025-04-30T00:45:29.160440880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161863 containerd[1475]: time="2025-04-30T00:45:29.160706000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161863 containerd[1475]: time="2025-04-30T00:45:29.160820680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:45:29.161863 containerd[1475]: time="2025-04-30T00:45:29.160835080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:45:29.161863 containerd[1475]: time="2025-04-30T00:45:29.160914520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 00:45:29.161863 containerd[1475]: time="2025-04-30T00:45:29.160953440Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:45:29.167780 containerd[1475]: time="2025-04-30T00:45:29.167737080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:45:29.167998 containerd[1475]: time="2025-04-30T00:45:29.167981720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:45:29.168232 containerd[1475]: time="2025-04-30T00:45:29.168216040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Apr 30 00:45:29.168314 containerd[1475]: time="2025-04-30T00:45:29.168301640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 00:45:29.168450 containerd[1475]: time="2025-04-30T00:45:29.168433640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:45:29.169556 containerd[1475]: time="2025-04-30T00:45:29.169529520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.171850920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172032680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172060320Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172075800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172091160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172104760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172117000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172130600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172146600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172159840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172172760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172184200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172222000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172588 containerd[1475]: time="2025-04-30T00:45:29.172237960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172250440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172265120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172278440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172291880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172304760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172318800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172332040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172346920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172358080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172370240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172388520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172404240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172425760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172437240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.172909 containerd[1475]: time="2025-04-30T00:45:29.172449360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174823760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174863600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174875760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174887520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174898200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174912320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174922560Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:45:29.176545 containerd[1475]: time="2025-04-30T00:45:29.174932680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:45:29.176821 containerd[1475]: time="2025-04-30T00:45:29.175315360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:45:29.176821 containerd[1475]: time="2025-04-30T00:45:29.175375920Z" level=info msg="Connect containerd service" Apr 30 00:45:29.176821 containerd[1475]: time="2025-04-30T00:45:29.175416160Z" level=info msg="using legacy CRI server" Apr 30 00:45:29.176821 containerd[1475]: time="2025-04-30T00:45:29.175423240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:45:29.176821 containerd[1475]: time="2025-04-30T00:45:29.175579800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:45:29.176821 
containerd[1475]: time="2025-04-30T00:45:29.176293000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:45:29.179352 containerd[1475]: time="2025-04-30T00:45:29.179313960Z" level=info msg="Start subscribing containerd event" Apr 30 00:45:29.180064 containerd[1475]: time="2025-04-30T00:45:29.179726480Z" level=info msg="Start recovering state" Apr 30 00:45:29.180064 containerd[1475]: time="2025-04-30T00:45:29.179813440Z" level=info msg="Start event monitor" Apr 30 00:45:29.180064 containerd[1475]: time="2025-04-30T00:45:29.179825880Z" level=info msg="Start snapshots syncer" Apr 30 00:45:29.180064 containerd[1475]: time="2025-04-30T00:45:29.179835800Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:45:29.180064 containerd[1475]: time="2025-04-30T00:45:29.179843760Z" level=info msg="Start streaming server" Apr 30 00:45:29.183933 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:45:29.189187 containerd[1475]: time="2025-04-30T00:45:29.181498840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:45:29.190640 containerd[1475]: time="2025-04-30T00:45:29.190600600Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:45:29.193535 containerd[1475]: time="2025-04-30T00:45:29.193510320Z" level=info msg="containerd successfully booted in 0.114078s" Apr 30 00:45:29.193668 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:45:29.210352 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:45:29.218045 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:45:29.229071 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:45:29.230662 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:45:29.239258 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:45:29.241013 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:45:29.256198 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:45:29.267889 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:45:29.271041 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:45:29.274090 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:45:29.468714 tar[1468]: linux-arm64/README.md Apr 30 00:45:29.481374 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:45:29.894823 systemd-networkd[1379]: eth0: Gained IPv6LL Apr 30 00:45:29.895852 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection. Apr 30 00:45:29.898460 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:45:29.900674 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:45:29.907304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:29.911708 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:45:29.940440 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
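The level=error line above ("no network config found in /etc/cni/net.d") is expected this early: containerd's CNI conf syncer starts before any network add-on has installed a config, and it keeps retrying. Purely for illustration, a minimal bridge conflist of the shape the syncer looks for (name and subnet invented) would be saved as /etc/cni/net.d/10-example.conflist:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }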
Apr 30 00:45:30.214745 systemd-networkd[1379]: eth1: Gained IPv6LL Apr 30 00:45:30.216367 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection. Apr 30 00:45:30.666771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:30.667917 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:45:30.669141 systemd[1]: Startup finished in 780ms (kernel) + 5.381s (initrd) + 4.627s (userspace) = 10.789s. Apr 30 00:45:30.675932 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:45:31.176267 kubelet[1578]: E0430 00:45:31.176202 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:45:31.180099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:45:31.180341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:45:41.430898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:45:41.441900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:41.570778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:41.584053 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:45:41.642331 kubelet[1597]: E0430 00:45:41.642259 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:45:41.645393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:45:41.645594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:45:51.896677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:45:51.904894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:52.023769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:52.040066 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:45:52.088486 kubelet[1611]: E0430 00:45:52.088397 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:45:52.090827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:45:52.090991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:00.392466 systemd-timesyncd[1363]: Contacted time server 194.59.205.229:123 (2.flatcar.pool.ntp.org). Apr 30 00:46:00.392588 systemd-timesyncd[1363]: Initial clock synchronization to Wed 2025-04-30 00:46:00.639517 UTC. 
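The kubelet crash loop that begins above and continues below has a single cause, stated in the error itself: /var/lib/kubelet/config.yaml does not exist yet, and systemd reschedules the unit roughly every ten seconds until it does. On a kubeadm-provisioned node that file is normally written by kubeadm init or kubeadm join; the smallest well-formed stand-in, shown only to make the expected format concrete, is:

    # /var/lib/kubelet/config.yaml (minimal sketch; real clusters set many more fields)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration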
Apr 30 00:46:02.138554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 00:46:02.160099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:02.272667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:02.278271 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:02.326989 kubelet[1628]: E0430 00:46:02.326939 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:02.332059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:02.332225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:12.386982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 00:46:12.393923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:12.514469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:12.529151 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:12.575639 kubelet[1644]: E0430 00:46:12.575554 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:12.579075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:12.579502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:13.926287 update_engine[1459]: I20250430 00:46:13.926116 1459 update_attempter.cc:509] Updating boot flags... Apr 30 00:46:13.968594 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1660) Apr 30 00:46:22.637496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 00:46:22.644851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:22.753295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:22.757706 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:22.805391 kubelet[1674]: E0430 00:46:22.805328 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:22.808270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:22.808455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:32.887369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Apr 30 00:46:32.898967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:33.028519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:33.051238 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:33.098104 kubelet[1690]: E0430 00:46:33.098034 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:33.101012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:33.101397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:43.138396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 00:46:43.147925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:43.259637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:43.264074 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:43.305087 kubelet[1704]: E0430 00:46:43.305022 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:43.307881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:43.308042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:46:53.387245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 00:46:53.402026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:46:53.520098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:46:53.525514 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:46:53.572380 kubelet[1719]: E0430 00:46:53.572245 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:46:53.575111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:46:53.575324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:03.637257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 00:47:03.648959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:03.774198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:47:03.778737 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:03.818798 kubelet[1734]: E0430 00:47:03.818732 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:03.821120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:03.821286 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:10.852311 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:47:10.866046 systemd[1]: Started sshd@0-49.13.50.0:22-139.178.68.195:34926.service - OpenSSH per-connection server daemon (139.178.68.195:34926). Apr 30 00:47:11.857489 sshd[1743]: Accepted publickey for core from 139.178.68.195 port 34926 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:11.860500 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:11.869998 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:47:11.875013 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:47:11.878139 systemd-logind[1458]: New session 1 of user core. Apr 30 00:47:11.889967 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:47:11.898989 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:47:11.902504 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:47:12.010220 systemd[1747]: Queued start job for default target default.target. Apr 30 00:47:12.021426 systemd[1747]: Created slice app.slice - User Application Slice. Apr 30 00:47:12.021480 systemd[1747]: Reached target paths.target - Paths. Apr 30 00:47:12.021503 systemd[1747]: Reached target timers.target - Timers. Apr 30 00:47:12.023646 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:47:12.040543 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:47:12.040755 systemd[1747]: Reached target sockets.target - Sockets. Apr 30 00:47:12.040772 systemd[1747]: Reached target basic.target - Basic System. Apr 30 00:47:12.040825 systemd[1747]: Reached target default.target - Main User Target. Apr 30 00:47:12.040855 systemd[1747]: Startup finished in 131ms. Apr 30 00:47:12.041216 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:47:12.057920 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:47:12.766128 systemd[1]: Started sshd@1-49.13.50.0:22-139.178.68.195:34934.service - OpenSSH per-connection server daemon (139.178.68.195:34934). Apr 30 00:47:13.740271 sshd[1758]: Accepted publickey for core from 139.178.68.195 port 34934 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:13.742454 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:13.747726 systemd-logind[1458]: New session 2 of user core. Apr 30 00:47:13.757871 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:47:13.887071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Apr 30 00:47:13.898011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:14.015217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:14.027057 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:14.076249 kubelet[1769]: E0430 00:47:14.076155 1769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:14.079216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:14.079396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:14.420997 sshd[1758]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:14.426904 systemd[1]: sshd@1-49.13.50.0:22-139.178.68.195:34934.service: Deactivated successfully. Apr 30 00:47:14.429434 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:47:14.430425 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:47:14.431958 systemd-logind[1458]: Removed session 2. Apr 30 00:47:14.591379 systemd[1]: Started sshd@2-49.13.50.0:22-139.178.68.195:34942.service - OpenSSH per-connection server daemon (139.178.68.195:34942). Apr 30 00:47:15.564459 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 34942 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:15.566723 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:15.573552 systemd-logind[1458]: New session 3 of user core. Apr 30 00:47:15.583914 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:47:16.238449 sshd[1781]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:16.244686 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:47:16.245073 systemd[1]: sshd@2-49.13.50.0:22-139.178.68.195:34942.service: Deactivated successfully. Apr 30 00:47:16.247629 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:47:16.250182 systemd-logind[1458]: Removed session 3. Apr 30 00:47:16.407135 systemd[1]: Started sshd@3-49.13.50.0:22-139.178.68.195:34828.service - OpenSSH per-connection server daemon (139.178.68.195:34828). Apr 30 00:47:17.381082 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 34828 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:17.383251 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:17.389452 systemd-logind[1458]: New session 4 of user core. Apr 30 00:47:17.399900 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:47:18.059253 sshd[1788]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:18.063974 systemd[1]: sshd@3-49.13.50.0:22-139.178.68.195:34828.service: Deactivated successfully. Apr 30 00:47:18.065747 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:47:18.067730 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:47:18.068812 systemd-logind[1458]: Removed session 4. 
Apr 30 00:47:18.239094 systemd[1]: Started sshd@4-49.13.50.0:22-139.178.68.195:34832.service - OpenSSH per-connection server daemon (139.178.68.195:34832). Apr 30 00:47:19.217054 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 34832 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:19.219475 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:19.224716 systemd-logind[1458]: New session 5 of user core. Apr 30 00:47:19.234923 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:47:19.751430 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:47:19.751775 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:19.766723 sudo[1798]: pam_unix(sudo:session): session closed for user root Apr 30 00:47:19.927635 sshd[1795]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:19.932903 systemd[1]: sshd@4-49.13.50.0:22-139.178.68.195:34832.service: Deactivated successfully. Apr 30 00:47:19.934764 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:47:19.935626 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:47:19.937059 systemd-logind[1458]: Removed session 5. Apr 30 00:47:20.103028 systemd[1]: Started sshd@5-49.13.50.0:22-139.178.68.195:34846.service - OpenSSH per-connection server daemon (139.178.68.195:34846). Apr 30 00:47:21.069511 sshd[1803]: Accepted publickey for core from 139.178.68.195 port 34846 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:21.071718 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:21.078280 systemd-logind[1458]: New session 6 of user core. Apr 30 00:47:21.089856 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:47:21.589141 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:47:21.589476 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:21.593450 sudo[1807]: pam_unix(sudo:session): session closed for user root Apr 30 00:47:21.599052 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:47:21.599437 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:21.625333 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:47:21.627171 auditctl[1810]: No rules Apr 30 00:47:21.627713 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:47:21.628015 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:47:21.631927 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:47:21.665483 augenrules[1828]: No rules Apr 30 00:47:21.667209 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:47:21.670351 sudo[1806]: pam_unix(sudo:session): session closed for user root Apr 30 00:47:21.828872 sshd[1803]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:21.834920 systemd[1]: sshd@5-49.13.50.0:22-139.178.68.195:34846.service: Deactivated successfully. Apr 30 00:47:21.836925 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:47:21.838346 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. 
Apr 30 00:47:21.839608 systemd-logind[1458]: Removed session 6. Apr 30 00:47:22.008868 systemd[1]: Started sshd@6-49.13.50.0:22-139.178.68.195:34850.service - OpenSSH per-connection server daemon (139.178.68.195:34850). Apr 30 00:47:23.014937 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 34850 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:47:23.017276 sshd[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:23.023800 systemd-logind[1458]: New session 7 of user core. Apr 30 00:47:23.030797 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:47:23.546235 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:47:23.546538 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:47:23.841875 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:47:23.843171 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:47:24.087984 dockerd[1854]: time="2025-04-30T00:47:24.087907707Z" level=info msg="Starting up" Apr 30 00:47:24.095303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 00:47:24.104695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:24.207390 dockerd[1854]: time="2025-04-30T00:47:24.207281035Z" level=info msg="Loading containers: start." Apr 30 00:47:24.259482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:24.266903 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:24.312864 kubelet[1887]: E0430 00:47:24.312797 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:24.314987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:24.315129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:24.344617 kernel: Initializing XFRM netlink socket Apr 30 00:47:24.421977 systemd-networkd[1379]: docker0: Link UP Apr 30 00:47:24.448053 dockerd[1854]: time="2025-04-30T00:47:24.447985804Z" level=info msg="Loading containers: done." Apr 30 00:47:24.466821 dockerd[1854]: time="2025-04-30T00:47:24.465986810Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:47:24.466821 dockerd[1854]: time="2025-04-30T00:47:24.466727108Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 00:47:24.467044 dockerd[1854]: time="2025-04-30T00:47:24.466899602Z" level=info msg="Daemon has completed initialization" Apr 30 00:47:24.503891 dockerd[1854]: time="2025-04-30T00:47:24.503649553Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:47:24.504845 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 30 00:47:25.518534 containerd[1475]: time="2025-04-30T00:47:25.518495308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 00:47:26.191805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124082459.mount: Deactivated successfully. Apr 30 00:47:28.580203 containerd[1475]: time="2025-04-30T00:47:28.578875306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:28.580203 containerd[1475]: time="2025-04-30T00:47:28.580154521Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233210" Apr 30 00:47:28.580894 containerd[1475]: time="2025-04-30T00:47:28.580855773Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:28.584649 containerd[1475]: time="2025-04-30T00:47:28.584610291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:28.587219 containerd[1475]: time="2025-04-30T00:47:28.587151479Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 3.068600927s" Apr 30 00:47:28.587306 containerd[1475]: time="2025-04-30T00:47:28.587228045Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" Apr 30 00:47:28.588913 containerd[1475]: time="2025-04-30T00:47:28.588871447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 00:47:31.376097 containerd[1475]: time="2025-04-30T00:47:31.376013737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:31.377207 containerd[1475]: time="2025-04-30T00:47:31.377162739Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529591" Apr 30 00:47:31.379586 containerd[1475]: time="2025-04-30T00:47:31.377885831Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:31.381418 containerd[1475]: time="2025-04-30T00:47:31.381346199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:31.382741 containerd[1475]: time="2025-04-30T00:47:31.382698376Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 2.793666237s"
Apr 30 00:47:31.382874 containerd[1475]: time="2025-04-30T00:47:31.382857147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" Apr 30 00:47:31.383625 containerd[1475]: time="2025-04-30T00:47:31.383540676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 00:47:33.436515 containerd[1475]: time="2025-04-30T00:47:33.436440088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:33.439992 containerd[1475]: time="2025-04-30T00:47:33.439924333Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482193" Apr 30 00:47:33.440474 containerd[1475]: time="2025-04-30T00:47:33.440445289Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:33.444295 containerd[1475]: time="2025-04-30T00:47:33.444259637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:33.445368 containerd[1475]: time="2025-04-30T00:47:33.445325392Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 2.061538578s" Apr 30 00:47:33.445368 containerd[1475]: time="2025-04-30T00:47:33.445367555Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" Apr 30 00:47:33.445812 containerd[1475]: time="2025-04-30T00:47:33.445785144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 00:47:34.387014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 00:47:34.392872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:34.534982 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:34.536743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:34.579479 kubelet[2081]: E0430 00:47:34.579321 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:34.583276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:34.583434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:34.818300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882595036.mount: Deactivated successfully.
Apr 30 00:47:35.474902 containerd[1475]: time="2025-04-30T00:47:35.474805524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:35.476490 containerd[1475]: time="2025-04-30T00:47:35.476002887Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370377" Apr 30 00:47:35.477681 containerd[1475]: time="2025-04-30T00:47:35.477604638Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:35.485590 containerd[1475]: time="2025-04-30T00:47:35.483921514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:35.486016 containerd[1475]: time="2025-04-30T00:47:35.485952014Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 2.040134147s" Apr 30 00:47:35.486016 containerd[1475]: time="2025-04-30T00:47:35.486011058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" Apr 30 00:47:35.487890 containerd[1475]: time="2025-04-30T00:47:35.487837384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 00:47:36.172663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421329708.mount: Deactivated successfully. 
Apr 30 00:47:37.732449 containerd[1475]: time="2025-04-30T00:47:37.731089385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:37.733572 containerd[1475]: time="2025-04-30T00:47:37.733521980Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Apr 30 00:47:37.734969 containerd[1475]: time="2025-04-30T00:47:37.734917675Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:37.739690 containerd[1475]: time="2025-04-30T00:47:37.739644149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:37.742076 containerd[1475]: time="2025-04-30T00:47:37.741878948Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.253996003s" Apr 30 00:47:37.742076 containerd[1475]: time="2025-04-30T00:47:37.741935627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Apr 30 00:47:37.742764 containerd[1475]: time="2025-04-30T00:47:37.742662774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 00:47:38.278666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120933311.mount: Deactivated successfully. 
Apr 30 00:47:38.289336 containerd[1475]: time="2025-04-30T00:47:38.288727832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:38.289455 containerd[1475]: time="2025-04-30T00:47:38.289426901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 30 00:47:38.290972 containerd[1475]: time="2025-04-30T00:47:38.290665241Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:38.292874 containerd[1475]: time="2025-04-30T00:47:38.292818326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:38.294237 containerd[1475]: time="2025-04-30T00:47:38.293748111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 550.883942ms" Apr 30 00:47:38.294237 containerd[1475]: time="2025-04-30T00:47:38.293788831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 30 00:47:38.294603 containerd[1475]: time="2025-04-30T00:47:38.294504019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 00:47:38.956225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171400457.mount: Deactivated successfully. Apr 30 00:47:43.734767 containerd[1475]: time="2025-04-30T00:47:43.734491316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:43.736336 containerd[1475]: time="2025-04-30T00:47:43.736267425Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537" Apr 30 00:47:43.738162 containerd[1475]: time="2025-04-30T00:47:43.738073814Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:43.742519 containerd[1475]: time="2025-04-30T00:47:43.742474467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:47:43.745242 containerd[1475]: time="2025-04-30T00:47:43.744638374Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 5.449937798s" Apr 30 00:47:43.745242 containerd[1475]: time="2025-04-30T00:47:43.744701134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Apr 30 00:47:44.636801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Apr 30 00:47:44.648329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:44.760811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:44.763497 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:47:44.805488 kubelet[2229]: E0430 00:47:44.805354 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:47:44.809094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:47:44.809525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:47:48.257697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:48.265119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:48.303405 systemd[1]: Reloading requested from client PID 2244 ('systemctl') (unit session-7.scope)... Apr 30 00:47:48.303425 systemd[1]: Reloading... Apr 30 00:47:48.421645 zram_generator::config[2280]: No configuration found. Apr 30 00:47:48.526301 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:47:48.597606 systemd[1]: Reloading finished in 293 ms. Apr 30 00:47:48.652436 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:47:48.652525 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:47:48.652823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:48.658215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:48.783798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:48.786643 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:47:48.826523 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:47:48.826523 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:47:48.826523 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:47:48.826955 kubelet[2332]: I0430 00:47:48.826738 2332 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:47:49.175932 kubelet[2332]: I0430 00:47:49.175663 2332 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:47:49.175932 kubelet[2332]: I0430 00:47:49.175695 2332 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:47:49.176145 kubelet[2332]: I0430 00:47:49.175997 2332 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:47:49.204826 kubelet[2332]: E0430 00:47:49.204468 2332 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.50.0:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:49.208113 kubelet[2332]: I0430 00:47:49.207118 2332 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:47:49.216417 kubelet[2332]: E0430 00:47:49.216379 2332 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:47:49.216590 kubelet[2332]: I0430 00:47:49.216575 2332 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:47:49.219227 kubelet[2332]: I0430 00:47:49.219196 2332 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:47:49.220304 kubelet[2332]: I0430 00:47:49.220262 2332 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:47:49.220615 kubelet[2332]: I0430 00:47:49.220409 2332 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-7-874bc1dee9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:47:49.220819 kubelet[2332]: I0430 00:47:49.220805 2332 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:47:49.220871 kubelet[2332]: I0430 00:47:49.220863 2332 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:47:49.221163 kubelet[2332]: I0430 00:47:49.221147 2332 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:49.224428 kubelet[2332]: I0430 00:47:49.224405 2332 kubelet.go:446] "Attempting to sync node with API server" Apr 30 00:47:49.224546 kubelet[2332]: I0430 00:47:49.224534 2332 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:47:49.224670 kubelet[2332]: I0430 00:47:49.224660 2332 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:47:49.224727 kubelet[2332]: I0430 00:47:49.224719 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:47:49.226577 kubelet[2332]: W0430 00:47:49.226498 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.50.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-7-874bc1dee9&limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:49.226649 kubelet[2332]: E0430 00:47:49.226612 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.50.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-7-874bc1dee9&limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:47:49.227784 kubelet[2332]: W0430 00:47:49.227748 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.50.0:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:49.227946 kubelet[2332]: E0430 00:47:49.227916 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.50.0:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:49.228344 kubelet[2332]: I0430 00:47:49.228326 2332 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:47:49.232078 kubelet[2332]: I0430 00:47:49.231170 2332 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:47:49.232078 kubelet[2332]: W0430 00:47:49.231297 2332 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 00:47:49.233584 kubelet[2332]: I0430 00:47:49.233312 2332 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:47:49.233584 kubelet[2332]: I0430 00:47:49.233346 2332 server.go:1287] "Started kubelet" Apr 30 00:47:49.237707 kubelet[2332]: E0430 00:47:49.237438 2332 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.50.0:6443/api/v1/namespaces/default/events\": dial tcp 49.13.50.0:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-7-874bc1dee9.183af22a57b227cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-7-874bc1dee9,UID:ci-4081-3-3-7-874bc1dee9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-7-874bc1dee9,},FirstTimestamp:2025-04-30 00:47:49.233330125 +0000 UTC m=+0.442936353,LastTimestamp:2025-04-30 00:47:49.233330125 +0000 UTC m=+0.442936353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-7-874bc1dee9,}" Apr 30 00:47:49.237850 kubelet[2332]: I0430 00:47:49.237821 2332 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:47:49.238720 kubelet[2332]: I0430 00:47:49.238675 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:47:49.239620 kubelet[2332]: I0430 00:47:49.238859 2332 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:47:49.239620 kubelet[2332]: I0430 00:47:49.239105 2332 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:47:49.243125 kubelet[2332]: I0430 00:47:49.240935 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:47:49.243125 kubelet[2332]: I0430 00:47:49.241157 2332 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:47:49.244699 kubelet[2332]: E0430 00:47:49.244679 2332 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-7-874bc1dee9\" not found"
Apr 30 00:47:49.244810 kubelet[2332]: I0430 00:47:49.244799 2332 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:47:49.245070 kubelet[2332]: I0430 00:47:49.245049 2332 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:47:49.245377 kubelet[2332]: I0430 00:47:49.245362 2332 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:47:49.246184 kubelet[2332]: W0430 00:47:49.246146 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.50.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:49.246298 kubelet[2332]: E0430 00:47:49.246282 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.50.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:49.246538 kubelet[2332]: I0430 00:47:49.246520 2332 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:47:49.246734 kubelet[2332]: I0430 00:47:49.246715 2332 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:47:49.249010 kubelet[2332]: E0430 00:47:49.248980 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.50.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-7-874bc1dee9?timeout=10s\": dial tcp 49.13.50.0:6443: connect: connection refused" interval="200ms" Apr 30 00:47:49.249290 kubelet[2332]: I0430 00:47:49.249268 2332 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:47:49.249352 kubelet[2332]: E0430 00:47:49.249043 2332 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:47:49.284475 kubelet[2332]: I0430 00:47:49.284434 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:47:49.285718 kubelet[2332]: I0430 00:47:49.285691 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:47:49.286218 kubelet[2332]: I0430 00:47:49.286197 2332 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:47:49.286339 kubelet[2332]: I0430 00:47:49.286326 2332 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 00:47:49.286389 kubelet[2332]: I0430 00:47:49.286381 2332 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:47:49.286496 kubelet[2332]: E0430 00:47:49.286471 2332 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:47:49.288951 kubelet[2332]: W0430 00:47:49.288913 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.50.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:49.289194 kubelet[2332]: E0430 00:47:49.288960 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.50.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:49.293213 kubelet[2332]: I0430 00:47:49.293195 2332 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:47:49.293425 kubelet[2332]: I0430 00:47:49.293411 2332 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:47:49.293534 kubelet[2332]: I0430 00:47:49.293525 2332 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:49.295950 kubelet[2332]: I0430 00:47:49.295925 2332 policy_none.go:49] "None policy: Start" Apr 30 00:47:49.296343 kubelet[2332]: I0430 00:47:49.296058 2332 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 00:47:49.296343 kubelet[2332]: I0430 00:47:49.296079 2332 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:47:49.303014 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:47:49.311887 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:47:49.316450 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:47:49.324983 kubelet[2332]: I0430 00:47:49.324633 2332 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:47:49.325099 kubelet[2332]: I0430 00:47:49.325020 2332 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:47:49.325099 kubelet[2332]: I0430 00:47:49.325042 2332 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:47:49.327242 kubelet[2332]: I0430 00:47:49.326511 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:47:49.328323 kubelet[2332]: E0430 00:47:49.328145 2332 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 00:47:49.328323 kubelet[2332]: E0430 00:47:49.328197 2332 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-7-874bc1dee9\" not found" Apr 30 00:47:49.400412 systemd[1]: Created slice kubepods-burstable-podf08d26f4974860c52b7ffb589a377c58.slice - libcontainer container kubepods-burstable-podf08d26f4974860c52b7ffb589a377c58.slice. 
Apr 30 00:47:49.417504 kubelet[2332]: E0430 00:47:49.417169 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.421262 systemd[1]: Created slice kubepods-burstable-pod96abf1ee77e0eb68ae4d7a9098af3e2b.slice - libcontainer container kubepods-burstable-pod96abf1ee77e0eb68ae4d7a9098af3e2b.slice. Apr 30 00:47:49.431656 kubelet[2332]: E0430 00:47:49.429153 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.431656 kubelet[2332]: I0430 00:47:49.429756 2332 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.431656 kubelet[2332]: E0430 00:47:49.430368 2332 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://49.13.50.0:6443/api/v1/nodes\": dial tcp 49.13.50.0:6443: connect: connection refused" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.434935 systemd[1]: Created slice kubepods-burstable-pod34ebbf0bd7e6f022cc3122d00c4a0a06.slice - libcontainer container kubepods-burstable-pod34ebbf0bd7e6f022cc3122d00c4a0a06.slice. Apr 30 00:47:49.437242 kubelet[2332]: E0430 00:47:49.437210 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.450196 kubelet[2332]: E0430 00:47:49.450145 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.50.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-7-874bc1dee9?timeout=10s\": dial tcp 49.13.50.0:6443: connect: connection refused" interval="400ms" Apr 30 00:47:49.547667 kubelet[2332]: I0430 00:47:49.547611 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.547815 kubelet[2332]: I0430 00:47:49.547694 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.547815 kubelet[2332]: I0430 00:47:49.547748 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.547815 kubelet[2332]: I0430 00:47:49.547790 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9"
Apr 30 00:47:49.548251 kubelet[2332]: I0430 00:47:49.547832 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34ebbf0bd7e6f022cc3122d00c4a0a06-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" (UID: \"34ebbf0bd7e6f022cc3122d00c4a0a06\") " pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.548251 kubelet[2332]: I0430 00:47:49.547933 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34ebbf0bd7e6f022cc3122d00c4a0a06-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" (UID: \"34ebbf0bd7e6f022cc3122d00c4a0a06\") " pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.548251 kubelet[2332]: I0430 00:47:49.548066 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34ebbf0bd7e6f022cc3122d00c4a0a06-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" (UID: \"34ebbf0bd7e6f022cc3122d00c4a0a06\") " pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.548251 kubelet[2332]: I0430 00:47:49.548090 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.548251 kubelet[2332]: I0430 00:47:49.548111 2332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96abf1ee77e0eb68ae4d7a9098af3e2b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-7-874bc1dee9\" (UID: \"96abf1ee77e0eb68ae4d7a9098af3e2b\") " pod="kube-system/kube-scheduler-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.634162 kubelet[2332]: I0430 00:47:49.634051 2332 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.634583 kubelet[2332]: E0430 00:47:49.634483 2332 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://49.13.50.0:6443/api/v1/nodes\": dial tcp 49.13.50.0:6443: connect: connection refused" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:49.719791 containerd[1475]: time="2025-04-30T00:47:49.719534766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-7-874bc1dee9,Uid:f08d26f4974860c52b7ffb589a377c58,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:49.731614 containerd[1475]: time="2025-04-30T00:47:49.731478252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-7-874bc1dee9,Uid:96abf1ee77e0eb68ae4d7a9098af3e2b,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:49.738555 containerd[1475]: time="2025-04-30T00:47:49.738484880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-7-874bc1dee9,Uid:34ebbf0bd7e6f022cc3122d00c4a0a06,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:49.851206 kubelet[2332]: E0430 00:47:49.851121 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.50.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-7-874bc1dee9?timeout=10s\": dial tcp 49.13.50.0:6443: connect: connection refused" interval="800ms"
Apr 30 00:47:50.037131 kubelet[2332]: I0430 00:47:50.036998 2332 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:50.037545 kubelet[2332]: E0430 00:47:50.037466 2332 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://49.13.50.0:6443/api/v1/nodes\": dial tcp 49.13.50.0:6443: connect: connection refused" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:50.152107 kubelet[2332]: W0430 00:47:50.151415 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.50.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:50.152295 kubelet[2332]: E0430 00:47:50.152155 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.50.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:50.242240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964232423.mount: Deactivated successfully. Apr 30 00:47:50.248618 containerd[1475]: time="2025-04-30T00:47:50.248383742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:50.250393 containerd[1475]: time="2025-04-30T00:47:50.250288192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 00:47:50.254991 containerd[1475]: time="2025-04-30T00:47:50.254790456Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:50.258233 containerd[1475]: time="2025-04-30T00:47:50.258161634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:47:50.259818 containerd[1475]: time="2025-04-30T00:47:50.259782163Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:50.259996 containerd[1475]: time="2025-04-30T00:47:50.259973364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:47:50.262053 containerd[1475]: time="2025-04-30T00:47:50.261238771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:47:50.264927 containerd[1475]: time="2025-04-30T00:47:50.264418188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.796708ms" Apr 30 00:47:50.265087 containerd[1475]: time="2025-04-30T00:47:50.265060271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:47:50.265952 containerd[1475]: time="2025-04-30T00:47:50.265922676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.262349ms" Apr 30 00:47:50.268655 containerd[1475]: time="2025-04-30T00:47:50.268580130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.970277ms" Apr 30 00:47:50.392012 containerd[1475]: time="2025-04-30T00:47:50.391679550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:50.392355 containerd[1475]: time="2025-04-30T00:47:50.391739470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:50.392355 containerd[1475]: time="2025-04-30T00:47:50.392256433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:50.392547 containerd[1475]: time="2025-04-30T00:47:50.392477874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:50.392921 containerd[1475]: time="2025-04-30T00:47:50.391682470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:50.393514 containerd[1475]: time="2025-04-30T00:47:50.393401199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:50.393514 containerd[1475]: time="2025-04-30T00:47:50.393438959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:50.394075 containerd[1475]: time="2025-04-30T00:47:50.393769001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:47:50.394689 kubelet[2332]: W0430 00:47:50.394634 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.50.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-7-874bc1dee9&limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:50.395022 kubelet[2332]: E0430 00:47:50.394982 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.50.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-7-874bc1dee9&limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:50.397862 containerd[1475]: time="2025-04-30T00:47:50.397533021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:50.397862 containerd[1475]: time="2025-04-30T00:47:50.397605501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:50.397862 containerd[1475]: time="2025-04-30T00:47:50.397622301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:50.397862 containerd[1475]: time="2025-04-30T00:47:50.397705542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:50.420263 systemd[1]: Started cri-containerd-5c72a28561d6f291c550e23e39fb2a8204b1c9b0cac8696dc2b9fc868ab5c918.scope - libcontainer container 5c72a28561d6f291c550e23e39fb2a8204b1c9b0cac8696dc2b9fc868ab5c918. Apr 30 00:47:50.428045 systemd[1]: Started cri-containerd-3f8a2da1f92d2f70b299df893811d2a0b87fb4048c477daa7dde75a3c4f15c17.scope - libcontainer container 3f8a2da1f92d2f70b299df893811d2a0b87fb4048c477daa7dde75a3c4f15c17. Apr 30 00:47:50.433001 systemd[1]: Started cri-containerd-9c898fccc078387fd85cbfc5e73ae34045db2bc4447eeb6262270a1ea43c089a.scope - libcontainer container 9c898fccc078387fd85cbfc5e73ae34045db2bc4447eeb6262270a1ea43c089a.
Apr 30 00:47:50.492805 containerd[1475]: time="2025-04-30T00:47:50.492660491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-7-874bc1dee9,Uid:34ebbf0bd7e6f022cc3122d00c4a0a06,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f8a2da1f92d2f70b299df893811d2a0b87fb4048c477daa7dde75a3c4f15c17\"" Apr 30 00:47:50.497902 containerd[1475]: time="2025-04-30T00:47:50.497635477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-7-874bc1dee9,Uid:f08d26f4974860c52b7ffb589a377c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c72a28561d6f291c550e23e39fb2a8204b1c9b0cac8696dc2b9fc868ab5c918\"" Apr 30 00:47:50.499666 containerd[1475]: time="2025-04-30T00:47:50.499479447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-7-874bc1dee9,Uid:96abf1ee77e0eb68ae4d7a9098af3e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c898fccc078387fd85cbfc5e73ae34045db2bc4447eeb6262270a1ea43c089a\"" Apr 30 00:47:50.500686 containerd[1475]: time="2025-04-30T00:47:50.500654693Z" level=info msg="CreateContainer within sandbox \"3f8a2da1f92d2f70b299df893811d2a0b87fb4048c477daa7dde75a3c4f15c17\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:47:50.505389 containerd[1475]: time="2025-04-30T00:47:50.505340079Z" level=info msg="CreateContainer within sandbox \"5c72a28561d6f291c550e23e39fb2a8204b1c9b0cac8696dc2b9fc868ab5c918\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:47:50.505501 containerd[1475]: time="2025-04-30T00:47:50.505345919Z" level=info msg="CreateContainer within sandbox \"9c898fccc078387fd85cbfc5e73ae34045db2bc4447eeb6262270a1ea43c089a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:47:50.515729 containerd[1475]: time="2025-04-30T00:47:50.515640934Z" level=info msg="CreateContainer within sandbox \"3f8a2da1f92d2f70b299df893811d2a0b87fb4048c477daa7dde75a3c4f15c17\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b7a3f4af71777b552d4740f3281b58dbd287449c9714c8e040a457461beb23ad\"" Apr 30 00:47:50.516711 containerd[1475]: time="2025-04-30T00:47:50.516676899Z" level=info msg="StartContainer for \"b7a3f4af71777b552d4740f3281b58dbd287449c9714c8e040a457461beb23ad\"" Apr 30 00:47:50.526414 containerd[1475]: time="2025-04-30T00:47:50.526366671Z" level=info msg="CreateContainer within sandbox \"9c898fccc078387fd85cbfc5e73ae34045db2bc4447eeb6262270a1ea43c089a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56562810572b60dfc1255a45c545dd779b18212270aba05b2c80c1f3c7e0fb42\"" Apr 30 00:47:50.527115 containerd[1475]: time="2025-04-30T00:47:50.526967674Z" level=info msg="CreateContainer within sandbox \"5c72a28561d6f291c550e23e39fb2a8204b1c9b0cac8696dc2b9fc868ab5c918\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091\"" Apr 30 00:47:50.527549 containerd[1475]: time="2025-04-30T00:47:50.527523597Z" level=info msg="StartContainer for \"8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091\"" Apr 30 00:47:50.528647 containerd[1475]: time="2025-04-30T00:47:50.527697758Z" level=info msg="StartContainer for \"56562810572b60dfc1255a45c545dd779b18212270aba05b2c80c1f3c7e0fb42\"" Apr 30 00:47:50.554941 systemd[1]: Started cri-containerd-b7a3f4af71777b552d4740f3281b58dbd287449c9714c8e040a457461beb23ad.scope - libcontainer container 
b7a3f4af71777b552d4740f3281b58dbd287449c9714c8e040a457461beb23ad. Apr 30 00:47:50.565650 systemd[1]: Started cri-containerd-8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091.scope - libcontainer container 8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091. Apr 30 00:47:50.568810 kubelet[2332]: W0430 00:47:50.568434 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.50.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:50.568810 kubelet[2332]: E0430 00:47:50.568477 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.50.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:50.580739 systemd[1]: Started cri-containerd-56562810572b60dfc1255a45c545dd779b18212270aba05b2c80c1f3c7e0fb42.scope - libcontainer container 56562810572b60dfc1255a45c545dd779b18212270aba05b2c80c1f3c7e0fb42. Apr 30 00:47:50.625593 containerd[1475]: time="2025-04-30T00:47:50.625171921Z" level=info msg="StartContainer for \"b7a3f4af71777b552d4740f3281b58dbd287449c9714c8e040a457461beb23ad\" returns successfully" Apr 30 00:47:50.637118 containerd[1475]: time="2025-04-30T00:47:50.637072905Z" level=info msg="StartContainer for \"8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091\" returns successfully" Apr 30 00:47:50.654930 kubelet[2332]: E0430 00:47:50.654774 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.50.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-7-874bc1dee9?timeout=10s\": dial tcp 49.13.50.0:6443: connect: connection refused" interval="1.6s" Apr 30 00:47:50.665003 containerd[1475]: time="2025-04-30T00:47:50.664841413Z" level=info msg="StartContainer for \"56562810572b60dfc1255a45c545dd779b18212270aba05b2c80c1f3c7e0fb42\" returns successfully" Apr 30 00:47:50.723819 kubelet[2332]: W0430 00:47:50.723705 2332 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.50.0:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.50.0:6443: connect: connection refused Apr 30 00:47:50.723819 kubelet[2332]: E0430 00:47:50.723781 2332 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.50.0:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.50.0:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:47:50.839967 kubelet[2332]: I0430 00:47:50.839371 2332 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:51.301247 kubelet[2332]: E0430 00:47:51.301210 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:51.306586 kubelet[2332]: E0430 00:47:51.306014 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:51.309643 
kubelet[2332]: E0430 00:47:51.308161 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.311392 kubelet[2332]: E0430 00:47:52.310987 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.311392 kubelet[2332]: E0430 00:47:52.311334 2332 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.613651 kubelet[2332]: E0430 00:47:52.611992 2332 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-7-874bc1dee9\" not found" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.736729 kubelet[2332]: I0430 00:47:52.736673 2332 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.736729 kubelet[2332]: E0430 00:47:52.736716 2332 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-7-874bc1dee9\": node \"ci-4081-3-3-7-874bc1dee9\" not found" Apr 30 00:47:52.747860 kubelet[2332]: E0430 00:47:52.747826 2332 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" Apr 30 00:47:52.848016 kubelet[2332]: E0430 00:47:52.847975 2332 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" Apr 30 00:47:52.949089 kubelet[2332]: I0430 00:47:52.949025 2332 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.962574 kubelet[2332]: E0430 00:47:52.962529 2332 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.962574 kubelet[2332]: I0430 00:47:52.962574 2332 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.964617 kubelet[2332]: E0430 00:47:52.964589 2332 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-7-874bc1dee9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.964682 kubelet[2332]: I0430 00:47:52.964620 2332 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:52.967283 kubelet[2332]: E0430 00:47:52.967230 2332 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:53.230073 kubelet[2332]: I0430 00:47:53.229948 2332 apiserver.go:52] "Watching apiserver" Apr 30 00:47:53.245735 kubelet[2332]: I0430 00:47:53.245680 2332 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:47:54.752536 systemd[1]: Reloading requested from client PID 2611 ('systemctl') (unit session-7.scope)... 
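[The RunPodSandbox -> CreateContainer -> StartContainer sequence recorded above is the standard CRI RuntimeService call order the kubelet drives against containerd. Below is a minimal sketch of the same three calls over containerd's CRI socket using the k8s.io/cri-api v1 client; the pod metadata is copied from the kube-apiserver entries above, while the socket path, image reference, and omitted fields (mounts, command, ports) are illustrative assumptions, not what the kubelet actually sent.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (path assumed; Flatcar runs containerd).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "kube-apiserver-ci-4081-3-3-7-874bc1dee9",
			Uid:       "34ebbf0bd7e6f022cc3122d00c4a0a06",
			Namespace: "kube-system",
		},
	}

	// 1. RunPodSandbox returns a sandbox id like 3f8a2da1f92d... above.
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox returns the container id.
	c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "kube-apiserver"},
			// Image reference is a guess; the log does not show it.
			Image: &runtime.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer matches the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
]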
Apr 30 00:47:54.753115 systemd[1]: Reloading... Apr 30 00:47:54.852595 zram_generator::config[2651]: No configuration found. Apr 30 00:47:54.972372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:47:55.063070 systemd[1]: Reloading finished in 309 ms. Apr 30 00:47:55.107670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:55.122775 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:47:55.123168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:55.138361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:47:55.267859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:47:55.284389 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:47:55.341343 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:47:55.341343 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:47:55.341343 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:47:55.341343 kubelet[2695]: I0430 00:47:55.340202 2695 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:47:55.356836 kubelet[2695]: I0430 00:47:55.355712 2695 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:47:55.356836 kubelet[2695]: I0430 00:47:55.355744 2695 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:47:55.356836 kubelet[2695]: I0430 00:47:55.356125 2695 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:47:55.359381 kubelet[2695]: I0430 00:47:55.358718 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:47:55.363047 kubelet[2695]: I0430 00:47:55.362102 2695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:47:55.370597 kubelet[2695]: E0430 00:47:55.369222 2695 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:47:55.370597 kubelet[2695]: I0430 00:47:55.369273 2695 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:47:55.372062 kubelet[2695]: I0430 00:47:55.372036 2695 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:47:55.372599 kubelet[2695]: I0430 00:47:55.372548 2695 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:47:55.373045 kubelet[2695]: I0430 00:47:55.372748 2695 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-7-874bc1dee9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:47:55.373554 kubelet[2695]: I0430 00:47:55.373538 2695 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:47:55.373694 kubelet[2695]: I0430 00:47:55.373683 2695 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:47:55.373830 kubelet[2695]: I0430 00:47:55.373816 2695 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:55.374055 kubelet[2695]: I0430 00:47:55.374042 2695 kubelet.go:446] "Attempting to sync node with API server" Apr 30 00:47:55.374181 kubelet[2695]: I0430 00:47:55.374169 2695 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:47:55.374259 kubelet[2695]: I0430 00:47:55.374250 2695 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:47:55.374322 kubelet[2695]: I0430 00:47:55.374313 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:47:55.377062 kubelet[2695]: I0430 00:47:55.377044 2695 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:47:55.377947 kubelet[2695]: I0430 00:47:55.377902 2695 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:47:55.378972 kubelet[2695]: I0430 00:47:55.378941 2695 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:47:55.379211 kubelet[2695]: I0430 00:47:55.379192 2695 server.go:1287] "Started kubelet" Apr 30 00:47:55.387974 kubelet[2695]: I0430 00:47:55.387944 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:47:55.393064 kubelet[2695]: I0430 00:47:55.393015 2695 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:47:55.395279 kubelet[2695]: I0430 00:47:55.394320 2695 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:47:55.398573 kubelet[2695]: I0430 00:47:55.394645 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:47:55.398573 kubelet[2695]: I0430 00:47:55.398239 2695 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:47:55.398573 kubelet[2695]: I0430 00:47:55.396852 2695 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:47:55.410762 kubelet[2695]: I0430 00:47:55.394989 2695 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:47:55.410861 kubelet[2695]: I0430 00:47:55.396876 2695 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:47:55.410975 kubelet[2695]: I0430 00:47:55.410957 2695 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:47:55.410975 kubelet[2695]: E0430 00:47:55.397168 2695 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-3-7-874bc1dee9\" not found" Apr 30 00:47:55.424690 kubelet[2695]: I0430 00:47:55.424659 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:47:55.426590 kubelet[2695]: I0430 00:47:55.426540 2695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:47:55.426700 kubelet[2695]: I0430 00:47:55.426690 2695 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:47:55.426766 kubelet[2695]: I0430 00:47:55.426757 2695 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 00:47:55.426831 kubelet[2695]: I0430 00:47:55.426822 2695 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:47:55.426938 kubelet[2695]: E0430 00:47:55.426921 2695 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:47:55.435774 kubelet[2695]: I0430 00:47:55.435732 2695 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:47:55.435774 kubelet[2695]: I0430 00:47:55.435755 2695 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:47:55.435944 kubelet[2695]: I0430 00:47:55.435879 2695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:47:55.508756 kubelet[2695]: I0430 00:47:55.508698 2695 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:47:55.508756 kubelet[2695]: I0430 00:47:55.508719 2695 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:47:55.508756 kubelet[2695]: I0430 00:47:55.508741 2695 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:47:55.508945 kubelet[2695]: I0430 00:47:55.508918 2695 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:47:55.508945 kubelet[2695]: I0430 00:47:55.508931 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:47:55.508997 kubelet[2695]: I0430 00:47:55.508950 2695 policy_none.go:49] "None policy: Start" Apr 30 00:47:55.508997 kubelet[2695]: I0430 00:47:55.508959 2695 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 00:47:55.508997 kubelet[2695]: I0430 00:47:55.508970 2695 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:47:55.509224 kubelet[2695]: I0430 00:47:55.509061 2695 state_mem.go:75] "Updated machine memory state" Apr 30 00:47:55.513537 kubelet[2695]: I0430 00:47:55.513511 2695 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:47:55.513718 kubelet[2695]: I0430 00:47:55.513704 2695 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:47:55.513751 kubelet[2695]: I0430 00:47:55.513720 2695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:47:55.514314 kubelet[2695]: I0430 00:47:55.514275 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:47:55.517095 kubelet[2695]: E0430 00:47:55.517053 2695 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 00:47:55.530367 kubelet[2695]: I0430 00:47:55.529624 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.534813 kubelet[2695]: I0430 00:47:55.534424 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.534813 kubelet[2695]: I0430 00:47:55.531478 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.626436 kubelet[2695]: I0430 00:47:55.626285 2695 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.638275 kubelet[2695]: I0430 00:47:55.638186 2695 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.638275 kubelet[2695]: I0430 00:47:55.638275 2695 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.713076 kubelet[2695]: I0430 00:47:55.713011 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34ebbf0bd7e6f022cc3122d00c4a0a06-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" (UID: \"34ebbf0bd7e6f022cc3122d00c4a0a06\") " pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.713076 kubelet[2695]: I0430 00:47:55.713064 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.713076 kubelet[2695]: I0430 00:47:55.713086 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96abf1ee77e0eb68ae4d7a9098af3e2b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-7-874bc1dee9\" (UID: \"96abf1ee77e0eb68ae4d7a9098af3e2b\") " pod="kube-system/kube-scheduler-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.715241 kubelet[2695]: I0430 00:47:55.713105 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34ebbf0bd7e6f022cc3122d00c4a0a06-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" (UID: \"34ebbf0bd7e6f022cc3122d00c4a0a06\") " pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.715241 kubelet[2695]: I0430 00:47:55.713123 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34ebbf0bd7e6f022cc3122d00c4a0a06-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" (UID: \"34ebbf0bd7e6f022cc3122d00c4a0a06\") " pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.715241 kubelet[2695]: I0430 00:47:55.713139 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.715241 kubelet[2695]: I0430 00:47:55.713156 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.715241 kubelet[2695]: I0430 00:47:55.713172 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:55.715525 kubelet[2695]: I0430 00:47:55.713188 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f08d26f4974860c52b7ffb589a377c58-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-7-874bc1dee9\" (UID: \"f08d26f4974860c52b7ffb589a377c58\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:56.375521 kubelet[2695]: I0430 00:47:56.375476 2695 apiserver.go:52] "Watching apiserver" Apr 30 00:47:56.411724 kubelet[2695]: I0430 00:47:56.411655 2695 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:47:56.497222 kubelet[2695]: I0430 00:47:56.497158 2695 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:56.546279 kubelet[2695]: E0430 00:47:56.546235 2695 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-7-874bc1dee9\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" Apr 30 00:47:56.612614 kubelet[2695]: I0430 00:47:56.612526 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-7-874bc1dee9" podStartSLOduration=1.6125064550000001 podStartE2EDuration="1.612506455s" podCreationTimestamp="2025-04-30 00:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:56.555395932 +0000 UTC m=+1.266034964" watchObservedRunningTime="2025-04-30 00:47:56.612506455 +0000 UTC m=+1.323145487" Apr 30 00:47:56.654471 kubelet[2695]: I0430 00:47:56.653831 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-7-874bc1dee9" podStartSLOduration=1.6538112470000002 podStartE2EDuration="1.653811247s" podCreationTimestamp="2025-04-30 00:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:56.613736751 +0000 UTC m=+1.324375783" watchObservedRunningTime="2025-04-30 00:47:56.653811247 +0000 UTC m=+1.364450279" Apr 30 00:47:56.703178 kubelet[2695]: I0430 00:47:56.703129 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-7-874bc1dee9" podStartSLOduration=1.703110226 podStartE2EDuration="1.703110226s" podCreationTimestamp="2025-04-30 00:47:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:56.654305173 +0000 UTC m=+1.364944245" watchObservedRunningTime="2025-04-30 00:47:56.703110226 +0000 UTC m=+1.413749258" Apr 30 00:47:59.938602 kubelet[2695]: I0430 00:47:59.938478 2695 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:47:59.940268 kubelet[2695]: I0430 00:47:59.939526 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:47:59.940355 containerd[1475]: time="2025-04-30T00:47:59.939203420Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:48:00.771681 systemd[1]: Created slice kubepods-besteffort-pod3500feca_e526_41af_b6f3_310da3263199.slice - libcontainer container kubepods-besteffort-pod3500feca_e526_41af_b6f3_310da3263199.slice. Apr 30 00:48:00.820097 sudo[1839]: pam_unix(sudo:session): session closed for user root Apr 30 00:48:00.851158 kubelet[2695]: I0430 00:48:00.851003 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn2t\" (UniqueName: \"kubernetes.io/projected/3500feca-e526-41af-b6f3-310da3263199-kube-api-access-7jn2t\") pod \"kube-proxy-tmqks\" (UID: \"3500feca-e526-41af-b6f3-310da3263199\") " pod="kube-system/kube-proxy-tmqks" Apr 30 00:48:00.851158 kubelet[2695]: I0430 00:48:00.851051 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3500feca-e526-41af-b6f3-310da3263199-kube-proxy\") pod \"kube-proxy-tmqks\" (UID: \"3500feca-e526-41af-b6f3-310da3263199\") " pod="kube-system/kube-proxy-tmqks" Apr 30 00:48:00.851158 kubelet[2695]: I0430 00:48:00.851072 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3500feca-e526-41af-b6f3-310da3263199-xtables-lock\") pod \"kube-proxy-tmqks\" (UID: \"3500feca-e526-41af-b6f3-310da3263199\") " pod="kube-system/kube-proxy-tmqks" Apr 30 00:48:00.851158 kubelet[2695]: I0430 00:48:00.851087 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3500feca-e526-41af-b6f3-310da3263199-lib-modules\") pod \"kube-proxy-tmqks\" (UID: \"3500feca-e526-41af-b6f3-310da3263199\") " pod="kube-system/kube-proxy-tmqks" Apr 30 00:48:00.982707 sshd[1836]: pam_unix(sshd:session): session closed for user core Apr 30 00:48:00.991333 systemd[1]: sshd@6-49.13.50.0:22-139.178.68.195:34850.service: Deactivated successfully. Apr 30 00:48:00.993257 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:48:00.993429 systemd[1]: session-7.scope: Consumed 6.042s CPU time, 150.1M memory peak, 0B memory swap peak. Apr 30 00:48:00.995079 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:48:00.996530 systemd-logind[1458]: Removed session 7. Apr 30 00:48:01.010126 systemd[1]: Created slice kubepods-besteffort-pod6941118e_522b_4d9e_8e11_ac4d35f3c485.slice - libcontainer container kubepods-besteffort-pod6941118e_522b_4d9e_8e11_ac4d35f3c485.slice. 
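[Each pod_startup_latency_tracker line above reports a podStartSLOduration; when no real image pull happened (firstStartedPulling and lastFinishedPulling at the zero time, as for these static pods), it reduces to observedRunningTime minus podCreationTimestamp. A sketch reproducing the scheduler pod's 1.612506455s from the timestamps logged above:

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in Go's default time.Time print format,
// which is the format the kubelet logged above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-04-30 00:47:55 +0000 UTC")
	running := mustParse("2025-04-30 00:47:56.612506455 +0000 UTC")

	// With no pull time to subtract, the SLO duration is the plain delta.
	fmt.Println(running.Sub(created)) // 1.612506455s
}
]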
Apr 30 00:48:01.053415 kubelet[2695]: I0430 00:48:01.052472 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6941118e-522b-4d9e-8e11-ac4d35f3c485-var-lib-calico\") pod \"tigera-operator-789496d6f5-ttlrq\" (UID: \"6941118e-522b-4d9e-8e11-ac4d35f3c485\") " pod="tigera-operator/tigera-operator-789496d6f5-ttlrq" Apr 30 00:48:01.053415 kubelet[2695]: I0430 00:48:01.053196 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5x4\" (UniqueName: \"kubernetes.io/projected/6941118e-522b-4d9e-8e11-ac4d35f3c485-kube-api-access-qr5x4\") pod \"tigera-operator-789496d6f5-ttlrq\" (UID: \"6941118e-522b-4d9e-8e11-ac4d35f3c485\") " pod="tigera-operator/tigera-operator-789496d6f5-ttlrq" Apr 30 00:48:01.083751 containerd[1475]: time="2025-04-30T00:48:01.083592972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmqks,Uid:3500feca-e526-41af-b6f3-310da3263199,Namespace:kube-system,Attempt:0,}" Apr 30 00:48:01.113814 containerd[1475]: time="2025-04-30T00:48:01.113430738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:48:01.113814 containerd[1475]: time="2025-04-30T00:48:01.113491259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:48:01.113814 containerd[1475]: time="2025-04-30T00:48:01.113511539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:01.113814 containerd[1475]: time="2025-04-30T00:48:01.113631501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:01.142903 systemd[1]: Started cri-containerd-b04e4ad11cfa4ce147e406f9c4e6ae0ef0fc3a29d3d9dafe9fd2c4f18ee26c01.scope - libcontainer container b04e4ad11cfa4ce147e406f9c4e6ae0ef0fc3a29d3d9dafe9fd2c4f18ee26c01. Apr 30 00:48:01.179189 containerd[1475]: time="2025-04-30T00:48:01.179130703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmqks,Uid:3500feca-e526-41af-b6f3-310da3263199,Namespace:kube-system,Attempt:0,} returns sandbox id \"b04e4ad11cfa4ce147e406f9c4e6ae0ef0fc3a29d3d9dafe9fd2c4f18ee26c01\"" Apr 30 00:48:01.186421 containerd[1475]: time="2025-04-30T00:48:01.186369961Z" level=info msg="CreateContainer within sandbox \"b04e4ad11cfa4ce147e406f9c4e6ae0ef0fc3a29d3d9dafe9fd2c4f18ee26c01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:48:01.209829 containerd[1475]: time="2025-04-30T00:48:01.209723444Z" level=info msg="CreateContainer within sandbox \"b04e4ad11cfa4ce147e406f9c4e6ae0ef0fc3a29d3d9dafe9fd2c4f18ee26c01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc85564ac41601e302afa9d53904fecff359065c6660077926c48e74add48024\"" Apr 30 00:48:01.211298 containerd[1475]: time="2025-04-30T00:48:01.210793704Z" level=info msg="StartContainer for \"dc85564ac41601e302afa9d53904fecff359065c6660077926c48e74add48024\"" Apr 30 00:48:01.237814 systemd[1]: Started cri-containerd-dc85564ac41601e302afa9d53904fecff359065c6660077926c48e74add48024.scope - libcontainer container dc85564ac41601e302afa9d53904fecff359065c6660077926c48e74add48024. 
Apr 30 00:48:01.272620 containerd[1475]: time="2025-04-30T00:48:01.272479994Z" level=info msg="StartContainer for \"dc85564ac41601e302afa9d53904fecff359065c6660077926c48e74add48024\" returns successfully" Apr 30 00:48:01.319608 containerd[1475]: time="2025-04-30T00:48:01.319481205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-ttlrq,Uid:6941118e-522b-4d9e-8e11-ac4d35f3c485,Namespace:tigera-operator,Attempt:0,}" Apr 30 00:48:01.350906 containerd[1475]: time="2025-04-30T00:48:01.350803359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:48:01.351270 containerd[1475]: time="2025-04-30T00:48:01.351124205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:48:01.351300 containerd[1475]: time="2025-04-30T00:48:01.351258167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:01.351627 containerd[1475]: time="2025-04-30T00:48:01.351535573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:01.374797 systemd[1]: Started cri-containerd-ac081b967f36243cac3c8c25c1bc6aa7f4ff43cce7fdf07d04a9b6ba89b506cc.scope - libcontainer container ac081b967f36243cac3c8c25c1bc6aa7f4ff43cce7fdf07d04a9b6ba89b506cc. Apr 30 00:48:01.421539 containerd[1475]: time="2025-04-30T00:48:01.421482659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-ttlrq,Uid:6941118e-522b-4d9e-8e11-ac4d35f3c485,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ac081b967f36243cac3c8c25c1bc6aa7f4ff43cce7fdf07d04a9b6ba89b506cc\"" Apr 30 00:48:01.423796 containerd[1475]: time="2025-04-30T00:48:01.423755942Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 00:48:01.526388 kubelet[2695]: I0430 00:48:01.526284 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tmqks" podStartSLOduration=1.526252766 podStartE2EDuration="1.526252766s" podCreationTimestamp="2025-04-30 00:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:48:01.524412011 +0000 UTC m=+6.235051043" watchObservedRunningTime="2025-04-30 00:48:01.526252766 +0000 UTC m=+6.236891838" Apr 30 00:48:03.550227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806242211.mount: Deactivated successfully. 
Apr 30 00:48:03.882684 containerd[1475]: time="2025-04-30T00:48:03.880502064Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:03.882684 containerd[1475]: time="2025-04-30T00:48:03.881893133Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 00:48:03.882684 containerd[1475]: time="2025-04-30T00:48:03.882408184Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:03.886010 containerd[1475]: time="2025-04-30T00:48:03.885957418Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:03.887018 containerd[1475]: time="2025-04-30T00:48:03.886972680Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.463175977s" Apr 30 00:48:03.887018 containerd[1475]: time="2025-04-30T00:48:03.887015401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 00:48:03.891059 containerd[1475]: time="2025-04-30T00:48:03.891024085Z" level=info msg="CreateContainer within sandbox \"ac081b967f36243cac3c8c25c1bc6aa7f4ff43cce7fdf07d04a9b6ba89b506cc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 00:48:03.906131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920049738.mount: Deactivated successfully. Apr 30 00:48:03.909084 containerd[1475]: time="2025-04-30T00:48:03.909028542Z" level=info msg="CreateContainer within sandbox \"ac081b967f36243cac3c8c25c1bc6aa7f4ff43cce7fdf07d04a9b6ba89b506cc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e\"" Apr 30 00:48:03.910982 containerd[1475]: time="2025-04-30T00:48:03.909796078Z" level=info msg="StartContainer for \"48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e\"" Apr 30 00:48:03.940778 systemd[1]: Started cri-containerd-48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e.scope - libcontainer container 48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e. 
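[The operator image pull above reports 19323084 bytes read in 2.463175977s, i.e. roughly 7.8 MB/s from quay.io. A quick back-of-the-envelope check, with both figures taken verbatim from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the quay.io/tigera/operator:v1.36.7 pull above.
	bytesRead := 19323084.0
	elapsed := 2463175977 * time.Nanosecond

	mbps := bytesRead / elapsed.Seconds() / 1e6
	fmt.Printf("%.2f MB/s\n", mbps) // ≈ 7.84 MB/s
}
]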
Apr 30 00:48:03.971790 containerd[1475]: time="2025-04-30T00:48:03.971720336Z" level=info msg="StartContainer for \"48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e\" returns successfully" Apr 30 00:48:08.805982 kubelet[2695]: I0430 00:48:08.805911 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-ttlrq" podStartSLOduration=6.340680718 podStartE2EDuration="8.805895976s" podCreationTimestamp="2025-04-30 00:48:00 +0000 UTC" firstStartedPulling="2025-04-30 00:48:01.423019728 +0000 UTC m=+6.133658760" lastFinishedPulling="2025-04-30 00:48:03.888234986 +0000 UTC m=+8.598874018" observedRunningTime="2025-04-30 00:48:04.543120992 +0000 UTC m=+9.253760064" watchObservedRunningTime="2025-04-30 00:48:08.805895976 +0000 UTC m=+13.516534968" Apr 30 00:48:08.981932 systemd[1]: Created slice kubepods-besteffort-podce0e8cd7_4c35_4907_a471_b2d8eb39cb52.slice - libcontainer container kubepods-besteffort-podce0e8cd7_4c35_4907_a471_b2d8eb39cb52.slice. Apr 30 00:48:09.003834 kubelet[2695]: I0430 00:48:09.003709 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ce0e8cd7-4c35-4907-a471-b2d8eb39cb52-typha-certs\") pod \"calico-typha-7785b54679-4nwbb\" (UID: \"ce0e8cd7-4c35-4907-a471-b2d8eb39cb52\") " pod="calico-system/calico-typha-7785b54679-4nwbb" Apr 30 00:48:09.003834 kubelet[2695]: I0430 00:48:09.003752 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxzs7\" (UniqueName: \"kubernetes.io/projected/ce0e8cd7-4c35-4907-a471-b2d8eb39cb52-kube-api-access-xxzs7\") pod \"calico-typha-7785b54679-4nwbb\" (UID: \"ce0e8cd7-4c35-4907-a471-b2d8eb39cb52\") " pod="calico-system/calico-typha-7785b54679-4nwbb" Apr 30 00:48:09.003834 kubelet[2695]: I0430 00:48:09.003773 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce0e8cd7-4c35-4907-a471-b2d8eb39cb52-tigera-ca-bundle\") pod \"calico-typha-7785b54679-4nwbb\" (UID: \"ce0e8cd7-4c35-4907-a471-b2d8eb39cb52\") " pod="calico-system/calico-typha-7785b54679-4nwbb" Apr 30 00:48:09.108379 systemd[1]: Created slice kubepods-besteffort-pod001f2072_9c2b_40bf_a66a_d6c019439f54.slice - libcontainer container kubepods-besteffort-pod001f2072_9c2b_40bf_a66a_d6c019439f54.slice. 
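[The "Created slice" names above encode each pod UID with dashes mapped to underscores: systemd interprets "-" inside a slice name as a hierarchy separator, so the kubelet's systemd cgroup driver ("CgroupDriver":"systemd" in the config dump earlier) escapes them. A sketch of the mapping; podSliceName is a hypothetical helper, not the kubelet's actual function:

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the slice names logged above. Dashes in the pod UID
// become underscores so systemd does not treat them as slice nesting.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "001f2072-9c2b-40bf-a66a-d6c019439f54"))
	// kubepods-besteffort-pod001f2072_9c2b_40bf_a66a_d6c019439f54.slice
}
]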
Apr 30 00:48:09.204623 kubelet[2695]: I0430 00:48:09.204422 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-policysync\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204623 kubelet[2695]: I0430 00:48:09.204471 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-cni-log-dir\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204623 kubelet[2695]: I0430 00:48:09.204495 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/001f2072-9c2b-40bf-a66a-d6c019439f54-tigera-ca-bundle\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204623 kubelet[2695]: I0430 00:48:09.204516 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-lib-modules\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204623 kubelet[2695]: I0430 00:48:09.204533 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-var-run-calico\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204966 kubelet[2695]: I0430 00:48:09.204549 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-cni-net-dir\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204966 kubelet[2695]: I0430 00:48:09.204582 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-flexvol-driver-host\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204966 kubelet[2695]: I0430 00:48:09.204639 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-cni-bin-dir\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204966 kubelet[2695]: I0430 00:48:09.204683 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58f5k\" (UniqueName: \"kubernetes.io/projected/001f2072-9c2b-40bf-a66a-d6c019439f54-kube-api-access-58f5k\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.204966 kubelet[2695]: I0430 00:48:09.204718 2695 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/001f2072-9c2b-40bf-a66a-d6c019439f54-node-certs\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.205099 kubelet[2695]: I0430 00:48:09.204741 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-var-lib-calico\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.205099 kubelet[2695]: I0430 00:48:09.204766 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/001f2072-9c2b-40bf-a66a-d6c019439f54-xtables-lock\") pod \"calico-node-6b4p2\" (UID: \"001f2072-9c2b-40bf-a66a-d6c019439f54\") " pod="calico-system/calico-node-6b4p2" Apr 30 00:48:09.215054 kubelet[2695]: E0430 00:48:09.214644 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2" Apr 30 00:48:09.286794 containerd[1475]: time="2025-04-30T00:48:09.286750845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7785b54679-4nwbb,Uid:ce0e8cd7-4c35-4907-a471-b2d8eb39cb52,Namespace:calico-system,Attempt:0,}" Apr 30 00:48:09.309915 kubelet[2695]: I0430 00:48:09.305245 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2-varrun\") pod \"csi-node-driver-6xsdl\" (UID: \"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2\") " pod="calico-system/csi-node-driver-6xsdl" Apr 30 00:48:09.309915 kubelet[2695]: I0430 00:48:09.307513 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2-registration-dir\") pod \"csi-node-driver-6xsdl\" (UID: \"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2\") " pod="calico-system/csi-node-driver-6xsdl" Apr 30 00:48:09.309915 kubelet[2695]: I0430 00:48:09.307612 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2-socket-dir\") pod \"csi-node-driver-6xsdl\" (UID: \"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2\") " pod="calico-system/csi-node-driver-6xsdl" Apr 30 00:48:09.309915 kubelet[2695]: I0430 00:48:09.307652 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpns5\" (UniqueName: \"kubernetes.io/projected/b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2-kube-api-access-tpns5\") pod \"csi-node-driver-6xsdl\" (UID: \"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2\") " pod="calico-system/csi-node-driver-6xsdl" Apr 30 00:48:09.309915 kubelet[2695]: I0430 00:48:09.307693 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2-kubelet-dir\") pod 
\"csi-node-driver-6xsdl\" (UID: \"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2\") " pod="calico-system/csi-node-driver-6xsdl" Apr 30 00:48:09.313405 kubelet[2695]: E0430 00:48:09.313378 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.313602 kubelet[2695]: W0430 00:48:09.313587 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.313693 kubelet[2695]: E0430 00:48:09.313681 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:09.323541 kubelet[2695]: E0430 00:48:09.323430 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.323541 kubelet[2695]: W0430 00:48:09.323449 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.323541 kubelet[2695]: E0430 00:48:09.323479 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:09.351307 containerd[1475]: time="2025-04-30T00:48:09.347244033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:48:09.351307 containerd[1475]: time="2025-04-30T00:48:09.347822408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:48:09.351307 containerd[1475]: time="2025-04-30T00:48:09.347842569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:09.351307 containerd[1475]: time="2025-04-30T00:48:09.348136177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:48:09.360866 kubelet[2695]: E0430 00:48:09.360769 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.361282 kubelet[2695]: W0430 00:48:09.361087 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.361282 kubelet[2695]: E0430 00:48:09.361120 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:09.378833 systemd[1]: Started cri-containerd-0403ba982934a39318185223b9979b3b8ecd89578e9ac99e2b9a909cf30e4a28.scope - libcontainer container 0403ba982934a39318185223b9979b3b8ecd89578e9ac99e2b9a909cf30e4a28. 
Apr 30 00:48:09.409309 kubelet[2695]: E0430 00:48:09.409198 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.409487 kubelet[2695]: W0430 00:48:09.409430 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.409487 kubelet[2695]: E0430 00:48:09.409460 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:09.410035 kubelet[2695]: E0430 00:48:09.409900 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.410240 kubelet[2695]: W0430 00:48:09.410117 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.410240 kubelet[2695]: E0430 00:48:09.410140 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:09.410694 kubelet[2695]: E0430 00:48:09.410681 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.410850 kubelet[2695]: W0430 00:48:09.410742 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.410850 kubelet[2695]: E0430 00:48:09.410762 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:09.411447 kubelet[2695]: E0430 00:48:09.411321 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.411447 kubelet[2695]: W0430 00:48:09.411335 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.411447 kubelet[2695]: E0430 00:48:09.411357 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:09.411884 kubelet[2695]: E0430 00:48:09.411856 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:09.411884 kubelet[2695]: W0430 00:48:09.411868 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:09.412130 kubelet[2695]: E0430 00:48:09.411967 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 00:48:09.412796 kubelet[2695]: E0430 00:48:09.412529 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:09.412796 kubelet[2695]: W0430 00:48:09.412541 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:09.412796 kubelet[2695]: E0430 00:48:09.412556 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:48:09.423844 containerd[1475]: time="2025-04-30T00:48:09.423779563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6b4p2,Uid:001f2072-9c2b-40bf-a66a-d6c019439f54,Namespace:calico-system,Attempt:0,}"
Apr 30 00:48:09.449054 kubelet[2695]: E0430 00:48:09.448786 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:09.449054 kubelet[2695]: W0430 00:48:09.448894 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:09.449054 kubelet[2695]: E0430 00:48:09.448914 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:48:09.463518 containerd[1475]: time="2025-04-30T00:48:09.462846629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7785b54679-4nwbb,Uid:ce0e8cd7-4c35-4907-a471-b2d8eb39cb52,Namespace:calico-system,Attempt:0,} returns sandbox id \"0403ba982934a39318185223b9979b3b8ecd89578e9ac99e2b9a909cf30e4a28\""
Apr 30 00:48:09.468832 containerd[1475]: time="2025-04-30T00:48:09.467556393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
Apr 30 00:48:09.474610 containerd[1475]: time="2025-04-30T00:48:09.474476895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:48:09.475342 containerd[1475]: time="2025-04-30T00:48:09.475239595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:48:09.475342 containerd[1475]: time="2025-04-30T00:48:09.475295996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:48:09.475342 containerd[1475]: time="2025-04-30T00:48:09.475450600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:48:09.495788 systemd[1]: Started cri-containerd-d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c.scope - libcontainer container d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c.
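
The three kubelet messages above repeat as a unit on every FlexVolume plugin probe in this window (00:48:09 through 00:48:54): the kubelet executes each driver binary found under the plugin directory with an init argument and parses its stdout as JSON, so a missing executable yields empty output and the unmarshal fails with "unexpected end of JSON input". A minimal sketch of a driver that would satisfy the init handshake follows; the response shape is the standard FlexVolume call-status convention, but the file name and capability set are illustrative assumptions, not a reconstruction of the absent nodeagent~uds driver.

    // flexvolume_init_stub.go - illustrative stand-in for a FlexVolume driver
    // binary. Installed as .../volume/exec/nodeagent~uds/uds, it would answer
    // the kubelet's "init" probe with JSON instead of the empty output logged
    // above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the FlexVolume call-status convention: a "status"
    // field plus optional capabilities reported in response to "init".
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func emit(s driverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Report success and declare that no attach/detach support is needed.
            emit(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            // Report unimplemented calls as unsupported rather than printing
            // nothing, which is what triggers the unmarshal error above.
            emit(driverStatus{Status: "Not supported"})
        }
    }
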
Apr 30 00:48:09.523691 containerd[1475]: time="2025-04-30T00:48:09.523596065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6b4p2,Uid:001f2072-9c2b-40bf-a66a-d6c019439f54,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c\""
Apr 30 00:48:09.587134 kubelet[2695]: E0430 00:48:09.587082 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:09.587134 kubelet[2695]: W0430 00:48:09.587119 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:09.587504 kubelet[2695]: E0430 00:48:09.587152 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:48:11.427496 kubelet[2695]: E0430 00:48:11.427325 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2"
Apr 30 00:48:53.429343 kubelet[2695]: E0430 00:48:53.429258 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2"
Apr 30 00:48:53.449615 containerd[1475]: time="2025-04-30T00:48:53.449521156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:48:53.451601 containerd[1475]: time="2025-04-30T00:48:53.451334558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
Apr 30 00:48:53.453080 containerd[1475]: time="2025-04-30T00:48:53.453002714Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:48:53.457714 containerd[1475]: time="2025-04-30T00:48:53.457654764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:48:53.459882 containerd[1475]: time="2025-04-30T00:48:53.459629774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 43.990784547s"
Apr 30 00:48:53.459882 containerd[1475]: time="2025-04-30T00:48:53.459704417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
Apr 30 00:48:53.463239 containerd[1475]: time="2025-04-30T00:48:53.463135012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
Apr 30 00:48:53.480007 containerd[1475]: time="2025-04-30T00:48:53.479676881Z" level=info msg="CreateContainer within sandbox \"0403ba982934a39318185223b9979b3b8ecd89578e9ac99e2b9a909cf30e4a28\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 30 00:48:53.500264 containerd[1475]: time="2025-04-30T00:48:53.500208730Z" level=info msg="CreateContainer within sandbox \"0403ba982934a39318185223b9979b3b8ecd89578e9ac99e2b9a909cf30e4a28\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d3dfaadf36ccd102ee2f9dad83c8c4498c8f54b5db7bb95943af0bceb50076e4\""
Apr 30 00:48:53.501810 containerd[1475]: time="2025-04-30T00:48:53.500949444Z" level=info msg="StartContainer for \"d3dfaadf36ccd102ee2f9dad83c8c4498c8f54b5db7bb95943af0bceb50076e4\""
Apr 30 00:48:53.533789 systemd[1]: Started cri-containerd-d3dfaadf36ccd102ee2f9dad83c8c4498c8f54b5db7bb95943af0bceb50076e4.scope - libcontainer container d3dfaadf36ccd102ee2f9dad83c8c4498c8f54b5db7bb95943af0bceb50076e4.
Apr 30 00:48:53.574585 containerd[1475]: time="2025-04-30T00:48:53.574498533Z" level=info msg="StartContainer for \"d3dfaadf36ccd102ee2f9dad83c8c4498c8f54b5db7bb95943af0bceb50076e4\" returns successfully"
Apr 30 00:48:53.666234 kubelet[2695]: I0430 00:48:53.666084 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7785b54679-4nwbb" podStartSLOduration=1.672075135 podStartE2EDuration="45.666059877s" podCreationTimestamp="2025-04-30 00:48:08 +0000 UTC" firstStartedPulling="2025-04-30 00:48:09.467236424 +0000 UTC m=+14.177875456" lastFinishedPulling="2025-04-30 00:48:53.461221126 +0000 UTC m=+58.171860198" observedRunningTime="2025-04-30 00:48:53.665762824 +0000 UTC m=+58.376401896" watchObservedRunningTime="2025-04-30 00:48:53.666059877 +0000 UTC m=+58.376698949"
Apr 30 00:48:53.678268 kubelet[2695]: E0430 00:48:53.678145 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:53.678989 kubelet[2695]: W0430 00:48:53.678686 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:53.678989 kubelet[2695]: E0430 00:48:53.678730 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:48:53.680513 kubelet[2695]: E0430 00:48:53.680083 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:53.680513 kubelet[2695]: W0430 00:48:53.680106 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:53.680513 kubelet[2695]: E0430 00:48:53.680280 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
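
The "Observed pod startup duration" record above is internally consistent: the SLO figure excludes time spent pulling images. Working it through with the monotonic (m=+) stamps from the record:

    image-pull window   = 58.171860198 - 14.177875456 = 43.993984742 s
    podStartE2EDuration = 45.666059877 s  (creation 00:48:08 to observed running 00:48:53.666)
    podStartSLOduration = 45.666059877 - 43.993984742 = 1.672075135 s

which matches the reported podStartSLOduration=1.672075135 exactly.
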
Apr 30 00:48:53.762108 kubelet[2695]: E0430 00:48:53.762058 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:53.762108 kubelet[2695]: W0430 00:48:53.762070 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:53.762108 kubelet[2695]: E0430 00:48:53.762083 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:48:54.705947 kubelet[2695]: E0430 00:48:54.705784 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:54.705947 kubelet[2695]: W0430 00:48:54.705811 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:54.705947 kubelet[2695]: E0430 00:48:54.705838 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:48:54.765948 kubelet[2695]: E0430 00:48:54.765936 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:48:54.765948 kubelet[2695]: W0430 00:48:54.765947 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:48:54.766034 kubelet[2695]: E0430 00:48:54.765969 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:54.766810 kubelet[2695]: E0430 00:48:54.766694 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.766810 kubelet[2695]: W0430 00:48:54.766713 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.766810 kubelet[2695]: E0430 00:48:54.766737 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:54.767458 kubelet[2695]: E0430 00:48:54.767435 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.767548 kubelet[2695]: W0430 00:48:54.767529 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.768101 kubelet[2695]: E0430 00:48:54.767808 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:54.768312 kubelet[2695]: E0430 00:48:54.768297 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.768404 kubelet[2695]: W0430 00:48:54.768381 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.768480 kubelet[2695]: E0430 00:48:54.768465 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:54.770887 kubelet[2695]: E0430 00:48:54.769919 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.770887 kubelet[2695]: W0430 00:48:54.769934 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.770887 kubelet[2695]: E0430 00:48:54.769957 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:54.771885 kubelet[2695]: E0430 00:48:54.771869 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.771996 kubelet[2695]: W0430 00:48:54.771979 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.772059 kubelet[2695]: E0430 00:48:54.772048 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:54.772311 kubelet[2695]: E0430 00:48:54.772292 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.772394 kubelet[2695]: W0430 00:48:54.772382 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.772464 kubelet[2695]: E0430 00:48:54.772452 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:54.772705 kubelet[2695]: E0430 00:48:54.772692 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.772784 kubelet[2695]: W0430 00:48:54.772764 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.772869 kubelet[2695]: E0430 00:48:54.772857 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:54.773167 kubelet[2695]: E0430 00:48:54.773154 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.773232 kubelet[2695]: W0430 00:48:54.773221 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.773293 kubelet[2695]: E0430 00:48:54.773282 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:48:54.774063 kubelet[2695]: E0430 00:48:54.774041 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:48:54.774063 kubelet[2695]: W0430 00:48:54.774058 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:48:54.774169 kubelet[2695]: E0430 00:48:54.774070 2695 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:48:54.953211 containerd[1475]: time="2025-04-30T00:48:54.953146479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:54.954465 containerd[1475]: time="2025-04-30T00:48:54.954397015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 00:48:54.956175 containerd[1475]: time="2025-04-30T00:48:54.956047250Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:54.960857 containerd[1475]: time="2025-04-30T00:48:54.960792426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:54.962437 containerd[1475]: time="2025-04-30T00:48:54.962061964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.498875549s" Apr 30 00:48:54.962437 containerd[1475]: time="2025-04-30T00:48:54.962104606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:48:54.965504 containerd[1475]: time="2025-04-30T00:48:54.965441877Z" level=info msg="CreateContainer within sandbox \"d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:48:54.984160 containerd[1475]: time="2025-04-30T00:48:54.984112486Z" level=info msg="CreateContainer within sandbox \"d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087\"" Apr 30 00:48:54.985646 containerd[1475]: time="2025-04-30T00:48:54.984734274Z" level=info msg="StartContainer for \"ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087\"" Apr 30 00:48:55.019749 systemd[1]: Started cri-containerd-ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087.scope - libcontainer container ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087. Apr 30 00:48:55.054323 containerd[1475]: time="2025-04-30T00:48:55.054174281Z" level=info msg="StartContainer for \"ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087\" returns successfully" Apr 30 00:48:55.075214 systemd[1]: cri-containerd-ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087.scope: Deactivated successfully. 
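The burst of identical kubelet errors above is the FlexVolume dynamic-probe loop: the kubelet execs the driver binary under nodeagent~uds with the argument init and JSON-decodes its stdout, and the flexvol-driver container started here (from the pod2daemon-flexvol image) is what installs that binary. Until it lands, the exec fails, stdout stays empty, and decoding an empty byte slice with Go's encoding/json yields exactly "unexpected end of JSON input". A minimal, self-contained Go sketch of both sides of that handshake; the DriverStatus type and probe helper are illustrative assumptions, not kubelet source:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the minimal shape of a FlexVolume driver response
// as shown in the published FlexVolume examples (an assumption for this
// sketch, not the kubelet's actual type).
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probe imitates the kubelet's driver-call: exec the driver with "init"
// and unmarshal whatever it printed.
func probe(driver string) error {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// The binary is absent, so the exec fails and out is empty.
		fmt.Printf("driver call failed: executable: %s, error: %v, output: %q\n", driver, err, out)
	}
	var st DriverStatus
	// Unmarshalling empty output returns exactly
	// "unexpected end of JSON input", the error repeated in the log above.
	if err := json.Unmarshal(out, &st); err != nil {
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, err)
	}
	return nil
}

func main() {
	if err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
	// A healthy driver would answer init with something like:
	ok, _ := json.Marshal(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	fmt.Println("expected init response:", string(ok))
}
```

The probe errors stop at 00:48:54.774 and the flexvol-driver container starts right after, which is consistent with the driver binary being installed and the init handshake finally returning parseable JSON.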
Apr 30 00:48:55.152155 containerd[1475]: time="2025-04-30T00:48:55.152042348Z" level=info msg="shim disconnected" id=ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087 namespace=k8s.io Apr 30 00:48:55.152155 containerd[1475]: time="2025-04-30T00:48:55.152173754Z" level=warning msg="cleaning up after shim disconnected" id=ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087 namespace=k8s.io Apr 30 00:48:55.152155 containerd[1475]: time="2025-04-30T00:48:55.152190755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:48:55.429347 kubelet[2695]: E0430 00:48:55.428208 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2" Apr 30 00:48:55.472647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec9cb73cd7fcdfd9420ba154ac0f3bb5115cd2b18f2985a262c031afecbbd087-rootfs.mount: Deactivated successfully. Apr 30 00:48:55.653371 containerd[1475]: time="2025-04-30T00:48:55.651257776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:48:57.428065 kubelet[2695]: E0430 00:48:57.427596 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2" Apr 30 00:48:59.427930 kubelet[2695]: E0430 00:48:59.427875 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2" Apr 30 00:48:59.688855 containerd[1475]: time="2025-04-30T00:48:59.688682344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:59.690531 containerd[1475]: time="2025-04-30T00:48:59.690467107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:48:59.691687 containerd[1475]: time="2025-04-30T00:48:59.691595920Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:59.694351 containerd[1475]: time="2025-04-30T00:48:59.694166519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:48:59.695625 containerd[1475]: time="2025-04-30T00:48:59.695310452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.044010114s" Apr 30 00:48:59.695625 containerd[1475]: time="2025-04-30T00:48:59.695352054Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:48:59.708720 containerd[1475]: time="2025-04-30T00:48:59.708628709Z" level=info msg="CreateContainer within sandbox \"d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:48:59.726658 containerd[1475]: time="2025-04-30T00:48:59.726502017Z" level=info msg="CreateContainer within sandbox \"d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2\"" Apr 30 00:48:59.728449 containerd[1475]: time="2025-04-30T00:48:59.728064690Z" level=info msg="StartContainer for \"184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2\"" Apr 30 00:48:59.766900 systemd[1]: Started cri-containerd-184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2.scope - libcontainer container 184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2. Apr 30 00:48:59.805729 containerd[1475]: time="2025-04-30T00:48:59.805417435Z" level=info msg="StartContainer for \"184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2\" returns successfully" Apr 30 00:49:00.286340 containerd[1475]: time="2025-04-30T00:49:00.286280966Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:49:00.288953 systemd[1]: cri-containerd-184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2.scope: Deactivated successfully. Apr 30 00:49:00.316167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2-rootfs.mount: Deactivated successfully. Apr 30 00:49:00.397665 kubelet[2695]: I0430 00:49:00.397630 2695 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 00:49:00.423469 containerd[1475]: time="2025-04-30T00:49:00.423349981Z" level=info msg="shim disconnected" id=184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2 namespace=k8s.io Apr 30 00:49:00.423469 containerd[1475]: time="2025-04-30T00:49:00.423456626Z" level=warning msg="cleaning up after shim disconnected" id=184e51d909a75fab5a9737d87afc56249842c1471afefa51fb3b0706260c66e2 namespace=k8s.io Apr 30 00:49:00.423469 containerd[1475]: time="2025-04-30T00:49:00.423467266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:49:00.451787 systemd[1]: Created slice kubepods-burstable-podd84adf8b_78ca_4caa_b7fa_141d6645016d.slice - libcontainer container kubepods-burstable-podd84adf8b_78ca_4caa_b7fa_141d6645016d.slice. Apr 30 00:49:00.464641 systemd[1]: Created slice kubepods-burstable-pod7d0658fc_ea0f_4318_925a_445b639eef93.slice - libcontainer container kubepods-burstable-pod7d0658fc_ea0f_4318_925a_445b639eef93.slice. Apr 30 00:49:00.476263 systemd[1]: Created slice kubepods-besteffort-pod5fde6995_50ea_4fad_b1cb_118dd741b7ae.slice - libcontainer container kubepods-besteffort-pod5fde6995_50ea_4fad_b1cb_118dd741b7ae.slice. Apr 30 00:49:00.484901 systemd[1]: Created slice kubepods-besteffort-pod42091737_1585_49ef_b158_7687df3ab4ee.slice - libcontainer container kubepods-besteffort-pod42091737_1585_49ef_b158_7687df3ab4ee.slice. 
Apr 30 00:49:00.492542 systemd[1]: Created slice kubepods-besteffort-pod6e063804_4633_4df5_bdc8_5123bd9ecf26.slice - libcontainer container kubepods-besteffort-pod6e063804_4633_4df5_bdc8_5123bd9ecf26.slice. Apr 30 00:49:00.502502 kubelet[2695]: I0430 00:49:00.502213 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e063804-4633-4df5-bdc8-5123bd9ecf26-calico-apiserver-certs\") pod \"calico-apiserver-c5755df56-rk6l6\" (UID: \"6e063804-4633-4df5-bdc8-5123bd9ecf26\") " pod="calico-apiserver/calico-apiserver-c5755df56-rk6l6" Apr 30 00:49:00.502502 kubelet[2695]: I0430 00:49:00.502257 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42628\" (UniqueName: \"kubernetes.io/projected/5fde6995-50ea-4fad-b1cb-118dd741b7ae-kube-api-access-42628\") pod \"calico-kube-controllers-7665649944-24vtf\" (UID: \"5fde6995-50ea-4fad-b1cb-118dd741b7ae\") " pod="calico-system/calico-kube-controllers-7665649944-24vtf" Apr 30 00:49:00.502502 kubelet[2695]: I0430 00:49:00.502288 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42091737-1585-49ef-b158-7687df3ab4ee-calico-apiserver-certs\") pod \"calico-apiserver-c5755df56-cw4q4\" (UID: \"42091737-1585-49ef-b158-7687df3ab4ee\") " pod="calico-apiserver/calico-apiserver-c5755df56-cw4q4" Apr 30 00:49:00.502502 kubelet[2695]: I0430 00:49:00.502308 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d84adf8b-78ca-4caa-b7fa-141d6645016d-config-volume\") pod \"coredns-668d6bf9bc-nmff2\" (UID: \"d84adf8b-78ca-4caa-b7fa-141d6645016d\") " pod="kube-system/coredns-668d6bf9bc-nmff2" Apr 30 00:49:00.502502 kubelet[2695]: I0430 00:49:00.502328 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4gbk\" (UniqueName: \"kubernetes.io/projected/42091737-1585-49ef-b158-7687df3ab4ee-kube-api-access-l4gbk\") pod \"calico-apiserver-c5755df56-cw4q4\" (UID: \"42091737-1585-49ef-b158-7687df3ab4ee\") " pod="calico-apiserver/calico-apiserver-c5755df56-cw4q4" Apr 30 00:49:00.502992 kubelet[2695]: I0430 00:49:00.502350 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ckjn\" (UniqueName: \"kubernetes.io/projected/6e063804-4633-4df5-bdc8-5123bd9ecf26-kube-api-access-2ckjn\") pod \"calico-apiserver-c5755df56-rk6l6\" (UID: \"6e063804-4633-4df5-bdc8-5123bd9ecf26\") " pod="calico-apiserver/calico-apiserver-c5755df56-rk6l6" Apr 30 00:49:00.502992 kubelet[2695]: I0430 00:49:00.502367 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fde6995-50ea-4fad-b1cb-118dd741b7ae-tigera-ca-bundle\") pod \"calico-kube-controllers-7665649944-24vtf\" (UID: \"5fde6995-50ea-4fad-b1cb-118dd741b7ae\") " pod="calico-system/calico-kube-controllers-7665649944-24vtf" Apr 30 00:49:00.502992 kubelet[2695]: I0430 00:49:00.502389 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d0658fc-ea0f-4318-925a-445b639eef93-config-volume\") pod \"coredns-668d6bf9bc-6dwx2\" (UID: 
\"7d0658fc-ea0f-4318-925a-445b639eef93\") " pod="kube-system/coredns-668d6bf9bc-6dwx2" Apr 30 00:49:00.502992 kubelet[2695]: I0430 00:49:00.502411 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrsww\" (UniqueName: \"kubernetes.io/projected/7d0658fc-ea0f-4318-925a-445b639eef93-kube-api-access-zrsww\") pod \"coredns-668d6bf9bc-6dwx2\" (UID: \"7d0658fc-ea0f-4318-925a-445b639eef93\") " pod="kube-system/coredns-668d6bf9bc-6dwx2" Apr 30 00:49:00.502992 kubelet[2695]: I0430 00:49:00.502427 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79rk\" (UniqueName: \"kubernetes.io/projected/d84adf8b-78ca-4caa-b7fa-141d6645016d-kube-api-access-s79rk\") pod \"coredns-668d6bf9bc-nmff2\" (UID: \"d84adf8b-78ca-4caa-b7fa-141d6645016d\") " pod="kube-system/coredns-668d6bf9bc-nmff2" Apr 30 00:49:00.671972 containerd[1475]: time="2025-04-30T00:49:00.671746853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:49:00.760042 containerd[1475]: time="2025-04-30T00:49:00.759889592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmff2,Uid:d84adf8b-78ca-4caa-b7fa-141d6645016d,Namespace:kube-system,Attempt:0,}" Apr 30 00:49:00.774310 containerd[1475]: time="2025-04-30T00:49:00.773806959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6dwx2,Uid:7d0658fc-ea0f-4318-925a-445b639eef93,Namespace:kube-system,Attempt:0,}" Apr 30 00:49:00.785151 containerd[1475]: time="2025-04-30T00:49:00.785111645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7665649944-24vtf,Uid:5fde6995-50ea-4fad-b1cb-118dd741b7ae,Namespace:calico-system,Attempt:0,}" Apr 30 00:49:00.789460 containerd[1475]: time="2025-04-30T00:49:00.789380283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-cw4q4,Uid:42091737-1585-49ef-b158-7687df3ab4ee,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:49:00.796962 containerd[1475]: time="2025-04-30T00:49:00.796902393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-rk6l6,Uid:6e063804-4633-4df5-bdc8-5123bd9ecf26,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:49:00.953478 containerd[1475]: time="2025-04-30T00:49:00.953221383Z" level=error msg="Failed to destroy network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.956084 containerd[1475]: time="2025-04-30T00:49:00.956024793Z" level=error msg="encountered an error cleaning up failed sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.956703 containerd[1475]: time="2025-04-30T00:49:00.956106637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7665649944-24vtf,Uid:5fde6995-50ea-4fad-b1cb-118dd741b7ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.956797 kubelet[2695]: E0430 00:49:00.956331 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.956797 kubelet[2695]: E0430 00:49:00.956396 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7665649944-24vtf" Apr 30 00:49:00.956797 kubelet[2695]: E0430 00:49:00.956416 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7665649944-24vtf" Apr 30 00:49:00.956892 kubelet[2695]: E0430 00:49:00.956464 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7665649944-24vtf_calico-system(5fde6995-50ea-4fad-b1cb-118dd741b7ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7665649944-24vtf_calico-system(5fde6995-50ea-4fad-b1cb-118dd741b7ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7665649944-24vtf" podUID="5fde6995-50ea-4fad-b1cb-118dd741b7ae" Apr 30 00:49:00.966397 containerd[1475]: time="2025-04-30T00:49:00.966341713Z" level=error msg="Failed to destroy network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.966794 containerd[1475]: time="2025-04-30T00:49:00.966754052Z" level=error msg="encountered an error cleaning up failed sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.966843 containerd[1475]: time="2025-04-30T00:49:00.966826616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-cw4q4,Uid:42091737-1585-49ef-b158-7687df3ab4ee,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.967247 kubelet[2695]: E0430 00:49:00.967204 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.967318 kubelet[2695]: E0430 00:49:00.967289 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5755df56-cw4q4" Apr 30 00:49:00.967364 kubelet[2695]: E0430 00:49:00.967316 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5755df56-cw4q4" Apr 30 00:49:00.967686 kubelet[2695]: E0430 00:49:00.967469 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5755df56-cw4q4_calico-apiserver(42091737-1585-49ef-b158-7687df3ab4ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5755df56-cw4q4_calico-apiserver(42091737-1585-49ef-b158-7687df3ab4ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5755df56-cw4q4" podUID="42091737-1585-49ef-b158-7687df3ab4ee" Apr 30 00:49:00.974349 containerd[1475]: time="2025-04-30T00:49:00.974305283Z" level=error msg="Failed to destroy network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.974678 containerd[1475]: time="2025-04-30T00:49:00.974647019Z" level=error msg="encountered an error cleaning up failed sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.974721 containerd[1475]: time="2025-04-30T00:49:00.974699342Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-nmff2,Uid:d84adf8b-78ca-4caa-b7fa-141d6645016d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.977297 kubelet[2695]: E0430 00:49:00.976777 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.977297 kubelet[2695]: E0430 00:49:00.976829 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nmff2" Apr 30 00:49:00.977297 kubelet[2695]: E0430 00:49:00.976848 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nmff2" Apr 30 00:49:00.977466 kubelet[2695]: E0430 00:49:00.976883 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nmff2_kube-system(d84adf8b-78ca-4caa-b7fa-141d6645016d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nmff2_kube-system(d84adf8b-78ca-4caa-b7fa-141d6645016d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nmff2" podUID="d84adf8b-78ca-4caa-b7fa-141d6645016d" Apr 30 00:49:00.979647 containerd[1475]: time="2025-04-30T00:49:00.979398240Z" level=error msg="Failed to destroy network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.979826 containerd[1475]: time="2025-04-30T00:49:00.979790059Z" level=error msg="encountered an error cleaning up failed sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.979877 containerd[1475]: 
time="2025-04-30T00:49:00.979854382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6dwx2,Uid:7d0658fc-ea0f-4318-925a-445b639eef93,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.980200 kubelet[2695]: E0430 00:49:00.980048 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:00.980200 kubelet[2695]: E0430 00:49:00.980142 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6dwx2" Apr 30 00:49:00.980200 kubelet[2695]: E0430 00:49:00.980163 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6dwx2" Apr 30 00:49:00.980384 kubelet[2695]: E0430 00:49:00.980206 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6dwx2_kube-system(7d0658fc-ea0f-4318-925a-445b639eef93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6dwx2_kube-system(7d0658fc-ea0f-4318-925a-445b639eef93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6dwx2" podUID="7d0658fc-ea0f-4318-925a-445b639eef93" Apr 30 00:49:01.000840 containerd[1475]: time="2025-04-30T00:49:01.000662069Z" level=error msg="Failed to destroy network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.001512 containerd[1475]: time="2025-04-30T00:49:01.001363822Z" level=error msg="encountered an error cleaning up failed sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 30 00:49:01.001936 containerd[1475]: time="2025-04-30T00:49:01.001483067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-rk6l6,Uid:6e063804-4633-4df5-bdc8-5123bd9ecf26,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.002665 kubelet[2695]: E0430 00:49:01.002276 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.002665 kubelet[2695]: E0430 00:49:01.002365 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5755df56-rk6l6" Apr 30 00:49:01.002665 kubelet[2695]: E0430 00:49:01.002402 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5755df56-rk6l6" Apr 30 00:49:01.002927 kubelet[2695]: E0430 00:49:01.002453 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5755df56-rk6l6_calico-apiserver(6e063804-4633-4df5-bdc8-5123bd9ecf26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5755df56-rk6l6_calico-apiserver(6e063804-4633-4df5-bdc8-5123bd9ecf26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5755df56-rk6l6" podUID="6e063804-4633-4df5-bdc8-5123bd9ecf26" Apr 30 00:49:01.437269 systemd[1]: Created slice kubepods-besteffort-podb8fd7f6d_a4f8_4bdf_9227_3046a8ddecc2.slice - libcontainer container kubepods-besteffort-podb8fd7f6d_a4f8_4bdf_9227_3046a8ddecc2.slice. 
Apr 30 00:49:01.440276 containerd[1475]: time="2025-04-30T00:49:01.440210460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6xsdl,Uid:b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2,Namespace:calico-system,Attempt:0,}" Apr 30 00:49:01.518219 containerd[1475]: time="2025-04-30T00:49:01.518148976Z" level=error msg="Failed to destroy network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.518783 containerd[1475]: time="2025-04-30T00:49:01.518715723Z" level=error msg="encountered an error cleaning up failed sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.518874 containerd[1475]: time="2025-04-30T00:49:01.518839649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6xsdl,Uid:b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.519160 kubelet[2695]: E0430 00:49:01.519125 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.519704 kubelet[2695]: E0430 00:49:01.519520 2695 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6xsdl" Apr 30 00:49:01.519704 kubelet[2695]: E0430 00:49:01.519551 2695 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6xsdl" Apr 30 00:49:01.519704 kubelet[2695]: E0430 00:49:01.519634 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6xsdl_calico-system(b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6xsdl_calico-system(b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2" Apr 30 00:49:01.673235 kubelet[2695]: I0430 00:49:01.672988 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:01.675059 containerd[1475]: time="2025-04-30T00:49:01.675012456Z" level=info msg="StopPodSandbox for \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\"" Apr 30 00:49:01.675211 containerd[1475]: time="2025-04-30T00:49:01.675184984Z" level=info msg="Ensure that sandbox 659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1 in task-service has been cleanup successfully" Apr 30 00:49:01.676109 kubelet[2695]: I0430 00:49:01.676074 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:01.677170 containerd[1475]: time="2025-04-30T00:49:01.676679214Z" level=info msg="StopPodSandbox for \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\"" Apr 30 00:49:01.677170 containerd[1475]: time="2025-04-30T00:49:01.676866582Z" level=info msg="Ensure that sandbox b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642 in task-service has been cleanup successfully" Apr 30 00:49:01.680453 kubelet[2695]: I0430 00:49:01.680415 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:01.682959 containerd[1475]: time="2025-04-30T00:49:01.682923465Z" level=info msg="StopPodSandbox for \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\"" Apr 30 00:49:01.683491 containerd[1475]: time="2025-04-30T00:49:01.683447930Z" level=info msg="Ensure that sandbox f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0 in task-service has been cleanup successfully" Apr 30 00:49:01.684076 kubelet[2695]: I0430 00:49:01.684014 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:01.687663 containerd[1475]: time="2025-04-30T00:49:01.687342351Z" level=info msg="StopPodSandbox for \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\"" Apr 30 00:49:01.687663 containerd[1475]: time="2025-04-30T00:49:01.687515719Z" level=info msg="Ensure that sandbox b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5 in task-service has been cleanup successfully" Apr 30 00:49:01.691952 kubelet[2695]: I0430 00:49:01.691012 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:01.692654 containerd[1475]: time="2025-04-30T00:49:01.692366666Z" level=info msg="StopPodSandbox for \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\"" Apr 30 00:49:01.694518 containerd[1475]: time="2025-04-30T00:49:01.694429602Z" level=info msg="Ensure that sandbox 00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a in task-service has been cleanup successfully" Apr 30 00:49:01.701148 kubelet[2695]: I0430 00:49:01.701016 2695 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:01.705216 containerd[1475]: time="2025-04-30T00:49:01.705013936Z" level=info msg="StopPodSandbox for \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\"" Apr 30 00:49:01.705487 containerd[1475]: time="2025-04-30T00:49:01.705190624Z" level=info msg="Ensure that sandbox ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941 in task-service has been cleanup successfully" Apr 30 00:49:01.725347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0-shm.mount: Deactivated successfully. Apr 30 00:49:01.727316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1-shm.mount: Deactivated successfully. Apr 30 00:49:01.759502 containerd[1475]: time="2025-04-30T00:49:01.759362072Z" level=error msg="StopPodSandbox for \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\" failed" error="failed to destroy network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.759692 kubelet[2695]: E0430 00:49:01.759597 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:01.759748 kubelet[2695]: E0430 00:49:01.759676 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1"} Apr 30 00:49:01.759748 kubelet[2695]: E0430 00:49:01.759742 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d84adf8b-78ca-4caa-b7fa-141d6645016d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.759836 kubelet[2695]: E0430 00:49:01.759763 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d84adf8b-78ca-4caa-b7fa-141d6645016d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nmff2" podUID="d84adf8b-78ca-4caa-b7fa-141d6645016d" Apr 30 00:49:01.774188 containerd[1475]: time="2025-04-30T00:49:01.773806826Z" level=error msg="StopPodSandbox for 
\"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\" failed" error="failed to destroy network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.774943 kubelet[2695]: E0430 00:49:01.774858 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:01.775016 kubelet[2695]: E0430 00:49:01.774908 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a"} Apr 30 00:49:01.775083 kubelet[2695]: E0430 00:49:01.775036 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42091737-1585-49ef-b158-7687df3ab4ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.775083 kubelet[2695]: E0430 00:49:01.775063 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42091737-1585-49ef-b158-7687df3ab4ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5755df56-cw4q4" podUID="42091737-1585-49ef-b158-7687df3ab4ee" Apr 30 00:49:01.782507 containerd[1475]: time="2025-04-30T00:49:01.782364865Z" level=error msg="StopPodSandbox for \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\" failed" error="failed to destroy network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.783184 kubelet[2695]: E0430 00:49:01.782781 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:01.783285 kubelet[2695]: E0430 00:49:01.783199 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642"} Apr 30 00:49:01.783285 kubelet[2695]: E0430 00:49:01.783260 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e063804-4633-4df5-bdc8-5123bd9ecf26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.783285 kubelet[2695]: E0430 00:49:01.783287 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e063804-4633-4df5-bdc8-5123bd9ecf26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5755df56-rk6l6" podUID="6e063804-4633-4df5-bdc8-5123bd9ecf26" Apr 30 00:49:01.785582 containerd[1475]: time="2025-04-30T00:49:01.785403247Z" level=error msg="StopPodSandbox for \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\" failed" error="failed to destroy network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.786014 kubelet[2695]: E0430 00:49:01.785955 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:01.786014 kubelet[2695]: E0430 00:49:01.786010 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5"} Apr 30 00:49:01.786114 kubelet[2695]: E0430 00:49:01.786040 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.786114 kubelet[2695]: E0430 00:49:01.786060 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6xsdl" podUID="b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2" Apr 30 00:49:01.791737 containerd[1475]: time="2025-04-30T00:49:01.791338924Z" level=error msg="StopPodSandbox for \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\" failed" error="failed to destroy network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.791850 kubelet[2695]: E0430 00:49:01.791557 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:01.791850 kubelet[2695]: E0430 00:49:01.791633 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0"} Apr 30 00:49:01.791850 kubelet[2695]: E0430 00:49:01.791665 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d0658fc-ea0f-4318-925a-445b639eef93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.791850 kubelet[2695]: E0430 00:49:01.791695 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d0658fc-ea0f-4318-925a-445b639eef93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6dwx2" podUID="7d0658fc-ea0f-4318-925a-445b639eef93" Apr 30 00:49:01.793790 containerd[1475]: time="2025-04-30T00:49:01.793546227Z" level=error msg="StopPodSandbox for \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\" failed" error="failed to destroy network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:49:01.794100 kubelet[2695]: E0430 00:49:01.793931 2695 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:01.794100 kubelet[2695]: E0430 00:49:01.793992 2695 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941"} Apr 30 00:49:01.794100 kubelet[2695]: E0430 00:49:01.794046 2695 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fde6995-50ea-4fad-b1cb-118dd741b7ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:49:01.794100 kubelet[2695]: E0430 00:49:01.794073 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fde6995-50ea-4fad-b1cb-118dd741b7ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7665649944-24vtf" podUID="5fde6995-50ea-4fad-b1cb-118dd741b7ae" Apr 30 00:49:07.083351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480961784.mount: Deactivated successfully. Apr 30 00:49:07.114344 containerd[1475]: time="2025-04-30T00:49:07.114275897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:07.115547 containerd[1475]: time="2025-04-30T00:49:07.115475914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:49:07.116479 containerd[1475]: time="2025-04-30T00:49:07.116400798Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:07.119681 containerd[1475]: time="2025-04-30T00:49:07.119617911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:07.120592 containerd[1475]: time="2025-04-30T00:49:07.120210899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.448417925s" Apr 30 00:49:07.120592 containerd[1475]: time="2025-04-30T00:49:07.120251501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:49:07.136360 containerd[1475]: time="2025-04-30T00:49:07.136298624Z" level=info msg="CreateContainer within sandbox \"d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:49:07.153244 containerd[1475]: time="2025-04-30T00:49:07.153185546Z" level=info msg="CreateContainer within sandbox \"d9a9cd8a897290bf1254fcf584415f2c8d3f70376a5608f9f8ba4a099607b75c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91ec67a91fc72d92696f8f0f07d84d6cde1a69dab9b2097c8ba6ed438f4ea0db\"" Apr 30 00:49:07.155542 containerd[1475]: time="2025-04-30T00:49:07.155499856Z" level=info msg="StartContainer for \"91ec67a91fc72d92696f8f0f07d84d6cde1a69dab9b2097c8ba6ed438f4ea0db\"" Apr 30 00:49:07.182775 systemd[1]: Started cri-containerd-91ec67a91fc72d92696f8f0f07d84d6cde1a69dab9b2097c8ba6ed438f4ea0db.scope - libcontainer container 91ec67a91fc72d92696f8f0f07d84d6cde1a69dab9b2097c8ba6ed438f4ea0db. Apr 30 00:49:07.217619 containerd[1475]: time="2025-04-30T00:49:07.217578605Z" level=info msg="StartContainer for \"91ec67a91fc72d92696f8f0f07d84d6cde1a69dab9b2097c8ba6ed438f4ea0db\" returns successfully" Apr 30 00:49:07.325753 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:49:07.325923 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Apr 30 00:49:07.747029 kubelet[2695]: I0430 00:49:07.745539 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6b4p2" podStartSLOduration=1.149509999 podStartE2EDuration="58.745521963s" podCreationTimestamp="2025-04-30 00:48:09 +0000 UTC" firstStartedPulling="2025-04-30 00:48:09.525378391 +0000 UTC m=+14.236017423" lastFinishedPulling="2025-04-30 00:49:07.121390355 +0000 UTC m=+71.832029387" observedRunningTime="2025-04-30 00:49:07.745346475 +0000 UTC m=+72.455985547" watchObservedRunningTime="2025-04-30 00:49:07.745521963 +0000 UTC m=+72.456160995" Apr 30 00:49:09.185591 kernel: bpftool[3985]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:49:09.462646 systemd-networkd[1379]: vxlan.calico: Link UP Apr 30 00:49:09.462654 systemd-networkd[1379]: vxlan.calico: Gained carrier Apr 30 00:49:10.630933 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Apr 30 00:49:13.430223 containerd[1475]: time="2025-04-30T00:49:13.429685753Z" level=info msg="StopPodSandbox for \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\"" Apr 30 00:49:13.433430 containerd[1475]: time="2025-04-30T00:49:13.430888091Z" level=info msg="StopPodSandbox for \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\"" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.552 [INFO][4109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.553 [INFO][4109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" iface="eth0" netns="/var/run/netns/cni-5435e202-3b22-4b7f-fb39-2844e3559bcc" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.554 [INFO][4109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" iface="eth0" netns="/var/run/netns/cni-5435e202-3b22-4b7f-fb39-2844e3559bcc" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.555 [INFO][4109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" iface="eth0" netns="/var/run/netns/cni-5435e202-3b22-4b7f-fb39-2844e3559bcc" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.555 [INFO][4109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.555 [INFO][4109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.605 [INFO][4122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.605 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.605 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.615 [WARNING][4122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.615 [INFO][4122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.618 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:13.623637 containerd[1475]: 2025-04-30 00:49:13.621 [INFO][4109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:13.626705 containerd[1475]: time="2025-04-30T00:49:13.626656567Z" level=info msg="TearDown network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\" successfully" Apr 30 00:49:13.626705 containerd[1475]: time="2025-04-30T00:49:13.626696129Z" level=info msg="StopPodSandbox for \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\" returns successfully" Apr 30 00:49:13.627705 systemd[1]: run-netns-cni\x2d5435e202\x2d3b22\x2d4b7f\x2dfb39\x2d2844e3559bcc.mount: Deactivated successfully. 
Apr 30 00:49:13.629586 containerd[1475]: time="2025-04-30T00:49:13.629443381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7665649944-24vtf,Uid:5fde6995-50ea-4fad-b1cb-118dd741b7ae,Namespace:calico-system,Attempt:1,}" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.560 [INFO][4108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.560 [INFO][4108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" iface="eth0" netns="/var/run/netns/cni-929573f8-568d-d8db-88ed-31e8b6bab558" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.560 [INFO][4108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" iface="eth0" netns="/var/run/netns/cni-929573f8-568d-d8db-88ed-31e8b6bab558" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.560 [INFO][4108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" iface="eth0" netns="/var/run/netns/cni-929573f8-568d-d8db-88ed-31e8b6bab558" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.561 [INFO][4108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.561 [INFO][4108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.605 [INFO][4124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.605 [INFO][4124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.618 [INFO][4124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.633 [WARNING][4124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.633 [INFO][4124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.637 [INFO][4124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:13.643025 containerd[1475]: 2025-04-30 00:49:13.639 [INFO][4108] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:13.644609 containerd[1475]: time="2025-04-30T00:49:13.643537020Z" level=info msg="TearDown network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\" successfully" Apr 30 00:49:13.644609 containerd[1475]: time="2025-04-30T00:49:13.643651026Z" level=info msg="StopPodSandbox for \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\" returns successfully" Apr 30 00:49:13.645751 containerd[1475]: time="2025-04-30T00:49:13.645711965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6dwx2,Uid:7d0658fc-ea0f-4318-925a-445b639eef93,Namespace:kube-system,Attempt:1,}" Apr 30 00:49:13.647675 systemd[1]: run-netns-cni\x2d929573f8\x2d568d\x2dd8db\x2d88ed\x2d31e8b6bab558.mount: Deactivated successfully. Apr 30 00:49:13.848995 systemd-networkd[1379]: calic4b6733a1e6: Link UP Apr 30 00:49:13.850430 systemd-networkd[1379]: calic4b6733a1e6: Gained carrier Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.720 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0 calico-kube-controllers-7665649944- calico-system 5fde6995-50ea-4fad-b1cb-118dd741b7ae 824 0 2025-04-30 00:48:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7665649944 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-7-874bc1dee9 calico-kube-controllers-7665649944-24vtf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic4b6733a1e6 [] []}} ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.721 [INFO][4137] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.772 [INFO][4160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" HandleID="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.791 [INFO][4160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" HandleID="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-7-874bc1dee9", "pod":"calico-kube-controllers-7665649944-24vtf", "timestamp":"2025-04-30 
00:49:13.772884815 +0000 UTC"}, Hostname:"ci-4081-3-3-7-874bc1dee9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.791 [INFO][4160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.792 [INFO][4160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.792 [INFO][4160] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-7-874bc1dee9' Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.798 [INFO][4160] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.809 [INFO][4160] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.816 [INFO][4160] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.818 [INFO][4160] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.821 [INFO][4160] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.821 [INFO][4160] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.823 [INFO][4160] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144 Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.832 [INFO][4160] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.841 [INFO][4160] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.1/26] block=192.168.55.0/26 handle="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.841 [INFO][4160] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.1/26] handle="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.841 [INFO][4160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:13.879739 containerd[1475]: 2025-04-30 00:49:13.841 [INFO][4160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.1/26] IPv6=[] ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" HandleID="k8s-pod-network.04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.880365 containerd[1475]: 2025-04-30 00:49:13.844 [INFO][4137] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0", GenerateName:"calico-kube-controllers-7665649944-", Namespace:"calico-system", SelfLink:"", UID:"5fde6995-50ea-4fad-b1cb-118dd741b7ae", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7665649944", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"", Pod:"calico-kube-controllers-7665649944-24vtf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b6733a1e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.880365 containerd[1475]: 2025-04-30 00:49:13.845 [INFO][4137] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.1/32] ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.880365 containerd[1475]: 2025-04-30 00:49:13.845 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4b6733a1e6 ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.880365 containerd[1475]: 2025-04-30 00:49:13.850 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.880365
containerd[1475]: 2025-04-30 00:49:13.852 [INFO][4137] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0", GenerateName:"calico-kube-controllers-7665649944-", Namespace:"calico-system", SelfLink:"", UID:"5fde6995-50ea-4fad-b1cb-118dd741b7ae", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7665649944", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144", Pod:"calico-kube-controllers-7665649944-24vtf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b6733a1e6", MAC:"3a:d0:35:31:a2:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.880365 containerd[1475]: 2025-04-30 00:49:13.876 [INFO][4137] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144" Namespace="calico-system" Pod="calico-kube-controllers-7665649944-24vtf" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:13.908053 containerd[1475]: time="2025-04-30T00:49:13.907941164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:13.908783 containerd[1475]: time="2025-04-30T00:49:13.908263939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:13.908783 containerd[1475]: time="2025-04-30T00:49:13.908337103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.908783 containerd[1475]: time="2025-04-30T00:49:13.908580435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:13.936986 systemd[1]: Started cri-containerd-04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144.scope - libcontainer container 04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144.
Apr 30 00:49:13.960744 systemd-networkd[1379]: cali128ccf17a88: Link UP Apr 30 00:49:13.962322 systemd-networkd[1379]: cali128ccf17a88: Gained carrier Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.742 [INFO][4146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0 coredns-668d6bf9bc- kube-system 7d0658fc-ea0f-4318-925a-445b639eef93 825 0 2025-04-30 00:48:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-7-874bc1dee9 coredns-668d6bf9bc-6dwx2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali128ccf17a88 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.743 [INFO][4146] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.783 [INFO][4165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" HandleID="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.811 [INFO][4165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" HandleID="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-7-874bc1dee9", "pod":"coredns-668d6bf9bc-6dwx2", "timestamp":"2025-04-30 00:49:13.783290796 +0000 UTC"}, Hostname:"ci-4081-3-3-7-874bc1dee9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.811 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.841 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.841 [INFO][4165] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-7-874bc1dee9' Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.900 [INFO][4165] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.909 [INFO][4165] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.918 [INFO][4165] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.922 [INFO][4165] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.926 [INFO][4165] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.926 [INFO][4165] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.930 [INFO][4165] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.942 [INFO][4165] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.953 [INFO][4165] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.2/26] block=192.168.55.0/26 handle="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.953 [INFO][4165] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.2/26] handle="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.953 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:13.985992 containerd[1475]: 2025-04-30 00:49:13.953 [INFO][4165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.2/26] IPv6=[] ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" HandleID="k8s-pod-network.bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.986732 containerd[1475]: 2025-04-30 00:49:13.956 [INFO][4146] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d0658fc-ea0f-4318-925a-445b639eef93", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"", Pod:"coredns-668d6bf9bc-6dwx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali128ccf17a88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.986732 containerd[1475]: 2025-04-30 00:49:13.956 [INFO][4146] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.2/32] ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.986732 containerd[1475]: 2025-04-30 00:49:13.956 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali128ccf17a88 ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.986732 containerd[1475]: 2025-04-30 00:49:13.963 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2"
WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:13.986732 containerd[1475]: 2025-04-30 00:49:13.964 [INFO][4146] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d0658fc-ea0f-4318-925a-445b639eef93", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d", Pod:"coredns-668d6bf9bc-6dwx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali128ccf17a88", MAC:"5e:a3:81:d1:66:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:13.986732 containerd[1475]: 2025-04-30 00:49:13.980 [INFO][4146] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-6dwx2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:14.016083 containerd[1475]: time="2025-04-30T00:49:14.016044896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7665649944-24vtf,Uid:5fde6995-50ea-4fad-b1cb-118dd741b7ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144\"" Apr 30 00:49:14.021144 containerd[1475]: time="2025-04-30T00:49:14.021103940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:49:14.026041 containerd[1475]: time="2025-04-30T00:49:14.025508833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:14.026041 containerd[1475]: time="2025-04-30T00:49:14.025631359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:14.026041 containerd[1475]: time="2025-04-30T00:49:14.025649920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.026041 containerd[1475]: time="2025-04-30T00:49:14.025738884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.049763 systemd[1]: Started cri-containerd-bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d.scope - libcontainer container bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d. Apr 30 00:49:14.088841 containerd[1475]: time="2025-04-30T00:49:14.088784809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6dwx2,Uid:7d0658fc-ea0f-4318-925a-445b639eef93,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d\"" Apr 30 00:49:14.095198 containerd[1475]: time="2025-04-30T00:49:14.094976508Z" level=info msg="CreateContainer within sandbox \"bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:49:14.136473 containerd[1475]: time="2025-04-30T00:49:14.136408749Z" level=info msg="CreateContainer within sandbox \"bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"289f18a4324a89f415c8cd4a0ab4e3cfe7d895263ffdc5406bf5990f5428f227\"" Apr 30 00:49:14.138668 containerd[1475]: time="2025-04-30T00:49:14.137149745Z" level=info msg="StartContainer for \"289f18a4324a89f415c8cd4a0ab4e3cfe7d895263ffdc5406bf5990f5428f227\"" Apr 30 00:49:14.166763 systemd[1]: Started cri-containerd-289f18a4324a89f415c8cd4a0ab4e3cfe7d895263ffdc5406bf5990f5428f227.scope - libcontainer container 289f18a4324a89f415c8cd4a0ab4e3cfe7d895263ffdc5406bf5990f5428f227. Apr 30 00:49:14.200336 containerd[1475]: time="2025-04-30T00:49:14.200244193Z" level=info msg="StartContainer for \"289f18a4324a89f415c8cd4a0ab4e3cfe7d895263ffdc5406bf5990f5428f227\" returns successfully" Apr 30 00:49:14.430087 containerd[1475]: time="2025-04-30T00:49:14.429482025Z" level=info msg="StopPodSandbox for \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\"" Apr 30 00:49:14.430250 containerd[1475]: time="2025-04-30T00:49:14.430132896Z" level=info msg="StopPodSandbox for \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\"" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.493 [INFO][4349] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.494 [INFO][4349] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" iface="eth0" netns="/var/run/netns/cni-1a107828-1653-0541-e851-1a8b7d8511f7" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.494 [INFO][4349] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" iface="eth0" netns="/var/run/netns/cni-1a107828-1653-0541-e851-1a8b7d8511f7" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.494 [INFO][4349] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" iface="eth0" netns="/var/run/netns/cni-1a107828-1653-0541-e851-1a8b7d8511f7" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.494 [INFO][4349] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.494 [INFO][4349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.530 [INFO][4363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.530 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.530 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.541 [WARNING][4363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.541 [INFO][4363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.544 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:14.552054 containerd[1475]: 2025-04-30 00:49:14.546 [INFO][4349] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:14.552054 containerd[1475]: time="2025-04-30T00:49:14.551926219Z" level=info msg="TearDown network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\" successfully" Apr 30 00:49:14.552054 containerd[1475]: time="2025-04-30T00:49:14.551953460Z" level=info msg="StopPodSandbox for \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\" returns successfully" Apr 30 00:49:14.554258 containerd[1475]: time="2025-04-30T00:49:14.553630061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-cw4q4,Uid:42091737-1585-49ef-b158-7687df3ab4ee,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.500 [INFO][4350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.500 [INFO][4350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" iface="eth0" netns="/var/run/netns/cni-260ec311-fefd-9ea7-325e-900d6bc5bbc6" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.500 [INFO][4350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" iface="eth0" netns="/var/run/netns/cni-260ec311-fefd-9ea7-325e-900d6bc5bbc6" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.500 [INFO][4350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" iface="eth0" netns="/var/run/netns/cni-260ec311-fefd-9ea7-325e-900d6bc5bbc6" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.500 [INFO][4350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.500 [INFO][4350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.532 [INFO][4368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.532 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.544 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.561 [WARNING][4368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.561 [INFO][4368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.564 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:14.569364 containerd[1475]: 2025-04-30 00:49:14.567 [INFO][4350] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:14.570274 containerd[1475]: time="2025-04-30T00:49:14.569721839Z" level=info msg="TearDown network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\" successfully" Apr 30 00:49:14.570274 containerd[1475]: time="2025-04-30T00:49:14.569753960Z" level=info msg="StopPodSandbox for \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\" returns successfully" Apr 30 00:49:14.570503 containerd[1475]: time="2025-04-30T00:49:14.570393871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6xsdl,Uid:b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2,Namespace:calico-system,Attempt:1,}" Apr 30 00:49:14.635988 systemd[1]: run-netns-cni\x2d260ec311\x2dfefd\x2d9ea7\x2d325e\x2d900d6bc5bbc6.mount: Deactivated successfully. Apr 30 00:49:14.636433 systemd[1]: run-netns-cni\x2d1a107828\x2d1653\x2d0541\x2de851\x2d1a8b7d8511f7.mount: Deactivated successfully. Apr 30 00:49:14.744193 systemd-networkd[1379]: cali156a8db1f68: Link UP Apr 30 00:49:14.746751 systemd-networkd[1379]: cali156a8db1f68: Gained carrier Apr 30 00:49:14.768978 kubelet[2695]: I0430 00:49:14.768891 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6dwx2" podStartSLOduration=74.768376074 podStartE2EDuration="1m14.768376074s" podCreationTimestamp="2025-04-30 00:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:49:14.766952005 +0000 UTC m=+79.477591077" watchObservedRunningTime="2025-04-30 00:49:14.768376074 +0000 UTC m=+79.479015066" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.628 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0 calico-apiserver-c5755df56- calico-apiserver 42091737-1585-49ef-b158-7687df3ab4ee 841 0 2025-04-30 00:48:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5755df56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-7-874bc1dee9 calico-apiserver-c5755df56-cw4q4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali156a8db1f68 [] []}} ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.629 [INFO][4377] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.673 [INFO][4404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" HandleID="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 
00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.690 [INFO][4404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" HandleID="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000384af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-7-874bc1dee9", "pod":"calico-apiserver-c5755df56-cw4q4", "timestamp":"2025-04-30 00:49:14.673707821 +0000 UTC"}, Hostname:"ci-4081-3-3-7-874bc1dee9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.690 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.690 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.690 [INFO][4404] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-7-874bc1dee9' Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.694 [INFO][4404] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.701 [INFO][4404] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.708 [INFO][4404] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.711 [INFO][4404] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.715 [INFO][4404] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.715 [INFO][4404] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.718 [INFO][4404] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4 Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.723 [INFO][4404] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.732 [INFO][4404] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.3/26] block=192.168.55.0/26 handle="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.732 [INFO][4404] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.3/26] 
handle="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.732 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:14.777659 containerd[1475]: 2025-04-30 00:49:14.732 [INFO][4404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.3/26] IPv6=[] ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" HandleID="k8s-pod-network.3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.779031 containerd[1475]: 2025-04-30 00:49:14.736 [INFO][4377] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"42091737-1585-49ef-b158-7687df3ab4ee", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"", Pod:"calico-apiserver-c5755df56-cw4q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali156a8db1f68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:14.779031 containerd[1475]: 2025-04-30 00:49:14.736 [INFO][4377] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.3/32] ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.779031 containerd[1475]: 2025-04-30 00:49:14.736 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali156a8db1f68 ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.779031 containerd[1475]: 2025-04-30 00:49:14.747 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" 
Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.779031 containerd[1475]: 2025-04-30 00:49:14.748 [INFO][4377] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"42091737-1585-49ef-b158-7687df3ab4ee", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4", Pod:"calico-apiserver-c5755df56-cw4q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali156a8db1f68", MAC:"02:3a:45:57:ba:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:14.779031 containerd[1475]: 2025-04-30 00:49:14.769 [INFO][4377] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-cw4q4" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:14.841757 containerd[1475]: time="2025-04-30T00:49:14.840809772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:14.841757 containerd[1475]: time="2025-04-30T00:49:14.840872975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:14.841757 containerd[1475]: time="2025-04-30T00:49:14.840884576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.841757 containerd[1475]: time="2025-04-30T00:49:14.840978100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.886749 systemd[1]: Started cri-containerd-3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4.scope - libcontainer container 3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4. Apr 30 00:49:14.887181 systemd-networkd[1379]: cali67038f3938e: Link UP Apr 30 00:49:14.887349 systemd-networkd[1379]: cali67038f3938e: Gained carrier Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.646 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0 csi-node-driver- calico-system b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2 842 0 2025-04-30 00:48:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-7-874bc1dee9 csi-node-driver-6xsdl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali67038f3938e [] []}} ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.646 [INFO][4385] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.677 [INFO][4409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" HandleID="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.695 [INFO][4409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" HandleID="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003190b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-7-874bc1dee9", "pod":"csi-node-driver-6xsdl", "timestamp":"2025-04-30 00:49:14.677473883 +0000 UTC"}, Hostname:"ci-4081-3-3-7-874bc1dee9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.695 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.733 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
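The pod_startup_latency_tracker entry above is internally consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, i.e. 00:49:14.768376074 - 00:48:00 = 74.768376074 s = 1m14.768376074s. kubelet's SLO duration normally subtracts the image-pull window (lastFinishedPulling - firstStartedPulling); both of those timestamps are the zero value here, so nothing is subtracted and podStartSLOduration equals the full end-to-end duration for coredns-668d6bf9bc-6dwx2.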
Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.733 [INFO][4409] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-7-874bc1dee9' Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.796 [INFO][4409] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.818 [INFO][4409] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.837 [INFO][4409] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.843 [INFO][4409] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.849 [INFO][4409] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.849 [INFO][4409] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.855 [INFO][4409] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7 Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.864 [INFO][4409] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.872 [INFO][4409] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.4/26] block=192.168.55.0/26 handle="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.872 [INFO][4409] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.4/26] handle="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.872 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
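The ipam.go traces above (Looking up existing affinities -> Trying affinity for 192.168.55.0/26 -> Attempting to load block -> Attempting to assign 1 addresses -> Successfully claimed IPs) repeat for each pod on this node, handing out .3, .4, .5, .6 in sequence from the host's affine /26 block while the host-wide IPAM lock is held. Below is a minimal in-memory sketch of that claim loop; the names are illustrative and the block is a toy map, not Calico's datastore-backed implementation.

package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamBlock is a toy stand-in for a Calico affinity block such as
// 192.168.55.0/26 in the traces above.
type ipamBlock struct {
	mu    sync.Mutex        // models the "host-wide IPAM lock" in the log
	cidr  *net.IPNet        // the block this host holds an affinity for
	inUse map[string]string // claimed IP -> handle ID
}

// next returns the IPv4 address immediately after ip.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		if out[i]++; out[i] != 0 {
			break
		}
	}
	return out
}

// assign mirrors "Attempting to assign 1 addresses from block": claim the
// first free address under the lock and record the handle against it.
func (b *ipamBlock) assign(handle string) (net.IP, error) {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.inUse[ip.String()]; !taken {
			b.inUse[ip.String()] = handle // "Successfully claimed IPs: [...]"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.55.0/26")
	b := &ipamBlock{cidr: cidr, inUse: map[string]string{
		"192.168.55.0": "reserved",  // network address
		"192.168.55.1": "earlier-a", // endpoints set up before this excerpt
		"192.168.55.2": "earlier-b",
	}}
	ip, _ := b.assign("k8s-pod-network.<containerID>") // handle shape from the log
	fmt.Println("claimed", ip)                         // claimed 192.168.55.3
}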
Apr 30 00:49:14.907182 containerd[1475]: 2025-04-30 00:49:14.872 [INFO][4409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.4/26] IPv6=[] ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" HandleID="k8s-pod-network.f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.907723 containerd[1475]: 2025-04-30 00:49:14.880 [INFO][4385] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"", Pod:"csi-node-driver-6xsdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67038f3938e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:14.907723 containerd[1475]: 2025-04-30 00:49:14.882 [INFO][4385] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.4/32] ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.907723 containerd[1475]: 2025-04-30 00:49:14.882 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67038f3938e ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.907723 containerd[1475]: 2025-04-30 00:49:14.885 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.907723 containerd[1475]: 2025-04-30 00:49:14.885 [INFO][4385] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7", Pod:"csi-node-driver-6xsdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67038f3938e", MAC:"7e:72:46:af:42:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:14.907723 containerd[1475]: 2025-04-30 00:49:14.903 [INFO][4385] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7" Namespace="calico-system" Pod="csi-node-driver-6xsdl" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:14.944704 containerd[1475]: time="2025-04-30T00:49:14.944397176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:14.944704 containerd[1475]: time="2025-04-30T00:49:14.944521342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:14.944704 containerd[1475]: time="2025-04-30T00:49:14.944540662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.945628 containerd[1475]: time="2025-04-30T00:49:14.945000845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:14.985848 systemd[1]: Started cri-containerd-f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7.scope - libcontainer container f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7. 
Apr 30 00:49:14.995165 containerd[1475]: time="2025-04-30T00:49:14.994950657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-cw4q4,Uid:42091737-1585-49ef-b158-7687df3ab4ee,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4\"" Apr 30 00:49:15.022222 containerd[1475]: time="2025-04-30T00:49:15.022111851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6xsdl,Uid:b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2,Namespace:calico-system,Attempt:1,} returns sandbox id \"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7\"" Apr 30 00:49:15.175876 systemd-networkd[1379]: calic4b6733a1e6: Gained IPv6LL Apr 30 00:49:15.430548 containerd[1475]: time="2025-04-30T00:49:15.430503538Z" level=info msg="StopPodSandbox for \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\"" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.485 [INFO][4548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.486 [INFO][4548] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" iface="eth0" netns="/var/run/netns/cni-cc278fc7-30cc-e4cb-77eb-a253cdda769c" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.486 [INFO][4548] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" iface="eth0" netns="/var/run/netns/cni-cc278fc7-30cc-e4cb-77eb-a253cdda769c" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.487 [INFO][4548] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" iface="eth0" netns="/var/run/netns/cni-cc278fc7-30cc-e4cb-77eb-a253cdda769c" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.487 [INFO][4548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.487 [INFO][4548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.515 [INFO][4556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.516 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.516 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.533 [WARNING][4556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.534 [INFO][4556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.538 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:15.543851 containerd[1475]: 2025-04-30 00:49:15.541 [INFO][4548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:15.546014 containerd[1475]: time="2025-04-30T00:49:15.545903643Z" level=info msg="TearDown network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\" successfully" Apr 30 00:49:15.546014 containerd[1475]: time="2025-04-30T00:49:15.546004448Z" level=info msg="StopPodSandbox for \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\" returns successfully" Apr 30 00:49:15.546608 containerd[1475]: time="2025-04-30T00:49:15.546538514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-rk6l6,Uid:6e063804-4633-4df5-bdc8-5123bd9ecf26,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:49:15.630285 systemd[1]: run-netns-cni\x2dcc278fc7\x2d30cc\x2de4cb\x2d77eb\x2da253cdda769c.mount: Deactivated successfully. 
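The run-netns-cni\x2d….mount unit names above are systemd's escaped form of the netns bind-mount paths under /run/netns (the /var/run/netns prefix logged by the CNI plugin resolves there because /var/run is a symlink to /run): '/' separators become '-', while '-' and other bytes outside [a-zA-Z0-9:_.] become \xXX escapes. A simplified sketch of that escaping, covering only what these paths need (see systemd-escape(1) for the full rules):

package main

import (
	"fmt"
	"strings"
)

// unitFromPath sketches the path-to-unit-name escaping behind
// "run-netns-cni\x2dcc278fc7\x2d….mount": '/' separators become '-',
// and bytes outside [a-zA-Z0-9:_.] (plus any leading '.') are written
// as \xXX. Simplified subset of systemd-escape(1).
func unitFromPath(path, suffix string) string {
	var b strings.Builder
	trimmed := strings.Trim(path, "/")
	for i := 0; i < len(trimmed); i++ {
		c := trimmed[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators turn into dashes
		case c == '.' && i == 0:
			fmt.Fprintf(&b, `\x%02x`, c) // a leading dot must be escaped
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c) // allowed verbatim
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' becomes \x2d, etc.
		}
	}
	return b.String() + suffix
}

func main() {
	fmt.Println(unitFromPath("/run/netns/cni-cc278fc7-30cc-e4cb-77eb-a253cdda769c", ".mount"))
	// Output: run-netns-cni\x2dcc278fc7\x2d30cc\x2de4cb\x2d77eb\x2da253cdda769c.mount
}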
Apr 30 00:49:15.751514 systemd-networkd[1379]: cali179e4c567a4: Link UP Apr 30 00:49:15.753776 systemd-networkd[1379]: cali179e4c567a4: Gained carrier Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.618 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0 calico-apiserver-c5755df56- calico-apiserver 6e063804-4633-4df5-bdc8-5123bd9ecf26 862 0 2025-04-30 00:48:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5755df56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-7-874bc1dee9 calico-apiserver-c5755df56-rk6l6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali179e4c567a4 [] []}} ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.618 [INFO][4563] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.666 [INFO][4576] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" HandleID="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.684 [INFO][4576] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" HandleID="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003038c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-7-874bc1dee9", "pod":"calico-apiserver-c5755df56-rk6l6", "timestamp":"2025-04-30 00:49:15.666316711 +0000 UTC"}, Hostname:"ci-4081-3-3-7-874bc1dee9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.684 [INFO][4576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.685 [INFO][4576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.685 [INFO][4576] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-7-874bc1dee9' Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.689 [INFO][4576] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.697 [INFO][4576] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.711 [INFO][4576] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.716 [INFO][4576] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.723 [INFO][4576] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.723 [INFO][4576] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.726 [INFO][4576] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.733 [INFO][4576] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.743 [INFO][4576] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.5/26] block=192.168.55.0/26 handle="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.743 [INFO][4576] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.5/26] handle="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.743 [INFO][4576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:15.775729 containerd[1475]: 2025-04-30 00:49:15.743 [INFO][4576] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.5/26] IPv6=[] ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" HandleID="k8s-pod-network.137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.777693 containerd[1475]: 2025-04-30 00:49:15.747 [INFO][4563] cni-plugin/k8s.go 386: Populated endpoint ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e063804-4633-4df5-bdc8-5123bd9ecf26", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"", Pod:"calico-apiserver-c5755df56-rk6l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali179e4c567a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:15.777693 containerd[1475]: 2025-04-30 00:49:15.747 [INFO][4563] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.5/32] ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.777693 containerd[1475]: 2025-04-30 00:49:15.747 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali179e4c567a4 ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.777693 containerd[1475]: 2025-04-30 00:49:15.751 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.777693 containerd[1475]: 2025-04-30 00:49:15.754 [INFO][4563] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e063804-4633-4df5-bdc8-5123bd9ecf26", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d", Pod:"calico-apiserver-c5755df56-rk6l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali179e4c567a4", MAC:"ba:34:87:3b:81:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:15.777693 containerd[1475]: 2025-04-30 00:49:15.771 [INFO][4563] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d" Namespace="calico-apiserver" Pod="calico-apiserver-c5755df56-rk6l6" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:15.811639 containerd[1475]: time="2025-04-30T00:49:15.811486818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:15.812442 containerd[1475]: time="2025-04-30T00:49:15.812374580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:15.812656 containerd[1475]: time="2025-04-30T00:49:15.812466745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:15.813625 containerd[1475]: time="2025-04-30T00:49:15.813548837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:15.839775 systemd[1]: Started cri-containerd-137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d.scope - libcontainer container 137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d. 
Apr 30 00:49:15.879008 systemd-networkd[1379]: cali128ccf17a88: Gained IPv6LL Apr 30 00:49:15.882793 containerd[1475]: time="2025-04-30T00:49:15.882737106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5755df56-rk6l6,Uid:6e063804-4633-4df5-bdc8-5123bd9ecf26,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d\"" Apr 30 00:49:15.943077 systemd-networkd[1379]: cali156a8db1f68: Gained IPv6LL Apr 30 00:49:16.326861 systemd-networkd[1379]: cali67038f3938e: Gained IPv6LL Apr 30 00:49:17.044799 containerd[1475]: time="2025-04-30T00:49:17.044584645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:17.046005 containerd[1475]: time="2025-04-30T00:49:17.045955592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:49:17.046888 containerd[1475]: time="2025-04-30T00:49:17.046815594Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:17.050614 containerd[1475]: time="2025-04-30T00:49:17.050443610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:49:17.051495 containerd[1475]: time="2025-04-30T00:49:17.051103002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.02995686s" Apr 30 00:49:17.051495 containerd[1475]: time="2025-04-30T00:49:17.051146484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:49:17.061087 containerd[1475]: time="2025-04-30T00:49:17.060516700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:49:17.098823 containerd[1475]: time="2025-04-30T00:49:17.098673394Z" level=info msg="CreateContainer within sandbox \"04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:49:17.121136 containerd[1475]: time="2025-04-30T00:49:17.120982118Z" level=info msg="CreateContainer within sandbox \"04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"26e13113531f5bb9d379b9e2bbaa7b113b6ae2f7fb676ba3c1a3039956385752\"" Apr 30 00:49:17.124582 containerd[1475]: time="2025-04-30T00:49:17.124332681Z" level=info msg="StartContainer for \"26e13113531f5bb9d379b9e2bbaa7b113b6ae2f7fb676ba3c1a3039956385752\"" Apr 30 00:49:17.172788 systemd[1]: Started cri-containerd-26e13113531f5bb9d379b9e2bbaa7b113b6ae2f7fb676ba3c1a3039956385752.scope - libcontainer container 26e13113531f5bb9d379b9e2bbaa7b113b6ae2f7fb676ba3c1a3039956385752. 
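The kube-controllers pull above also yields a quick throughput figure: 32,554,116 bytes read in 3.02995686 s is roughly 10.7 MB/s (about 86 Mbit/s) for the transferred image data; the 33,923,266-byte size reported alongside the repo digest is a separately accounted image size, so the two byte counts are not expected to match.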
Apr 30 00:49:17.212428 containerd[1475]: time="2025-04-30T00:49:17.212381919Z" level=info msg="StartContainer for \"26e13113531f5bb9d379b9e2bbaa7b113b6ae2f7fb676ba3c1a3039956385752\" returns successfully" Apr 30 00:49:17.438424 containerd[1475]: time="2025-04-30T00:49:17.438030244Z" level=info msg="StopPodSandbox for \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\"" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.500 [INFO][4692] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.502 [INFO][4692] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" iface="eth0" netns="/var/run/netns/cni-95d1d30f-8bf0-d666-0b6c-60855407e012" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.502 [INFO][4692] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" iface="eth0" netns="/var/run/netns/cni-95d1d30f-8bf0-d666-0b6c-60855407e012" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.502 [INFO][4692] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" iface="eth0" netns="/var/run/netns/cni-95d1d30f-8bf0-d666-0b6c-60855407e012" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.502 [INFO][4692] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.502 [INFO][4692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.536 [INFO][4699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.537 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.537 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.549 [WARNING][4699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.549 [INFO][4699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.552 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:49:17.556084 containerd[1475]: 2025-04-30 00:49:17.553 [INFO][4692] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:17.556952 containerd[1475]: time="2025-04-30T00:49:17.556340273Z" level=info msg="TearDown network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\" successfully" Apr 30 00:49:17.556952 containerd[1475]: time="2025-04-30T00:49:17.556390475Z" level=info msg="StopPodSandbox for \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\" returns successfully" Apr 30 00:49:17.562106 containerd[1475]: time="2025-04-30T00:49:17.561973146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmff2,Uid:d84adf8b-78ca-4caa-b7fa-141d6645016d,Namespace:kube-system,Attempt:1,}" Apr 30 00:49:17.608185 systemd-networkd[1379]: cali179e4c567a4: Gained IPv6LL Apr 30 00:49:17.729919 systemd-networkd[1379]: cali4ec5c9ddc85: Link UP Apr 30 00:49:17.731699 systemd-networkd[1379]: cali4ec5c9ddc85: Gained carrier Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.624 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0 coredns-668d6bf9bc- kube-system d84adf8b-78ca-4caa-b7fa-141d6645016d 876 0 2025-04-30 00:48:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-7-874bc1dee9 coredns-668d6bf9bc-nmff2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ec5c9ddc85 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.624 [INFO][4705] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.659 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" HandleID="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.677 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" HandleID="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d110), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-7-874bc1dee9", "pod":"coredns-668d6bf9bc-nmff2", "timestamp":"2025-04-30 00:49:17.65981138 +0000 UTC"}, Hostname:"ci-4081-3-3-7-874bc1dee9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.678 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.678 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.678 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-7-874bc1dee9' Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.681 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.688 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.696 [INFO][4717] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.700 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.703 [INFO][4717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.703 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.706 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5 Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.712 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.720 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.6/26] block=192.168.55.0/26 handle="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.720 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.6/26] handle="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" host="ci-4081-3-3-7-874bc1dee9" Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.720 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
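The IPAM trace above follows Calico's block-affinity scheme end to end: look up the host's affine block (192.168.55.0/26), confirm the affinity, claim the next free address under the host-wide lock, and write the block back, ending with 192.168.55.6/26 claimed. Below is a toy in-memory model of that shape; the types and method names are invented for the sketch and are not Calico's real ones.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// Hypothetical model of the logged flow: one /26 block affine to the host,
// a host-wide lock, and a handle recorded per claimed address.
type Block struct {
	CIDR      *net.IPNet
	allocated map[string]string // IP -> handle ID
}

var hostIPAMLock sync.Mutex // "About to acquire host-wide IPAM lock."

func (b *Block) AssignFromBlock(handleID string) (net.IP, error) {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	// Walk the block for the first unclaimed address, as in
	// "Attempting to assign 1 addresses from block block=192.168.55.0/26".
	// (A toy: real IPAM would also skip the network/broadcast addresses.)
	for ip := b.CIDR.IP.Mask(b.CIDR.Mask); b.CIDR.Contains(ip); ip = next(ip) {
		if _, taken := b.allocated[ip.String()]; !taken {
			b.allocated[ip.String()] = handleID // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.CIDR)
}

func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.55.0/26")
	b := &Block{CIDR: cidr, allocated: map[string]string{}}
	// Pretend .0 through .5 are taken, matching the claim of 192.168.55.6 above.
	for i := 0; i < 6; i++ {
		b.allocated[fmt.Sprintf("192.168.55.%d", i)] = "earlier-handle"
	}
	ip, _ := b.AssignFromBlock("example-handle")
	fmt.Println("claimed:", ip) // claimed: 192.168.55.6
}
```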
Apr 30 00:49:17.755148 containerd[1475]: 2025-04-30 00:49:17.720 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.6/26] IPv6=[] ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" HandleID="k8s-pod-network.ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.756873 containerd[1475]: 2025-04-30 00:49:17.723 [INFO][4705] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d84adf8b-78ca-4caa-b7fa-141d6645016d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"", Pod:"coredns-668d6bf9bc-nmff2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ec5c9ddc85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:17.756873 containerd[1475]: 2025-04-30 00:49:17.723 [INFO][4705] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.6/32] ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.756873 containerd[1475]: 2025-04-30 00:49:17.724 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ec5c9ddc85 ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.756873 containerd[1475]: 2025-04-30 00:49:17.731 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" 
WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.756873 containerd[1475]: 2025-04-30 00:49:17.732 [INFO][4705] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d84adf8b-78ca-4caa-b7fa-141d6645016d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5", Pod:"coredns-668d6bf9bc-nmff2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ec5c9ddc85", MAC:"3a:75:27:bc:df:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:17.756873 containerd[1475]: 2025-04-30 00:49:17.751 [INFO][4705] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmff2" WorkloadEndpoint="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:17.797869 containerd[1475]: time="2025-04-30T00:49:17.792152011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:49:17.797869 containerd[1475]: time="2025-04-30T00:49:17.792204614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:49:17.797869 containerd[1475]: time="2025-04-30T00:49:17.792215334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:17.797869 containerd[1475]: time="2025-04-30T00:49:17.792337500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:49:17.809739 kubelet[2695]: I0430 00:49:17.809198 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7665649944-24vtf" podStartSLOduration=65.767775783 podStartE2EDuration="1m8.809173518s" podCreationTimestamp="2025-04-30 00:48:09 +0000 UTC" firstStartedPulling="2025-04-30 00:49:14.018674183 +0000 UTC m=+78.729313215" lastFinishedPulling="2025-04-30 00:49:17.060071918 +0000 UTC m=+81.770710950" observedRunningTime="2025-04-30 00:49:17.809043512 +0000 UTC m=+82.519682544" watchObservedRunningTime="2025-04-30 00:49:17.809173518 +0000 UTC m=+82.519812550" Apr 30 00:49:17.826584 systemd[1]: Started cri-containerd-ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5.scope - libcontainer container ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5. Apr 30 00:49:17.887433 containerd[1475]: time="2025-04-30T00:49:17.887390919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmff2,Uid:d84adf8b-78ca-4caa-b7fa-141d6645016d,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5\"" Apr 30 00:49:17.894901 containerd[1475]: time="2025-04-30T00:49:17.894844081Z" level=info msg="CreateContainer within sandbox \"ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:49:17.915095 containerd[1475]: time="2025-04-30T00:49:17.914832412Z" level=info msg="CreateContainer within sandbox \"ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ddbe5fbc86b570908048955a265021761fb7037c9b802dd911556a2eb0e0e49\"" Apr 30 00:49:17.915823 containerd[1475]: time="2025-04-30T00:49:17.915791979Z" level=info msg="StartContainer for \"5ddbe5fbc86b570908048955a265021761fb7037c9b802dd911556a2eb0e0e49\"" Apr 30 00:49:17.951042 systemd[1]: Started cri-containerd-5ddbe5fbc86b570908048955a265021761fb7037c9b802dd911556a2eb0e0e49.scope - libcontainer container 5ddbe5fbc86b570908048955a265021761fb7037c9b802dd911556a2eb0e0e49. Apr 30 00:49:17.983923 containerd[1475]: time="2025-04-30T00:49:17.983739081Z" level=info msg="StartContainer for \"5ddbe5fbc86b570908048955a265021761fb7037c9b802dd911556a2eb0e0e49\" returns successfully" Apr 30 00:49:18.095053 systemd[1]: run-netns-cni\x2d95d1d30f\x2d8bf0\x2dd666\x2d0b6c\x2d60855407e012.mount: Deactivated successfully. 
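The kubelet pod_startup_latency_tracker entry above carries enough timestamps to reproduce its own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which matches the logged 65.767775783s exactly. A small check using the values as printed in the entry:

```go
package main

import (
	"fmt"
	"time"
)

// Recompute the two durations from the timestamps logged for
// calico-kube-controllers-7665649944-24vtf. The layout string matches the
// "+0000 UTC" form kubelet prints above.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-04-30 00:48:09 +0000 UTC")
	firstPull := parse("2025-04-30 00:49:14.018674183 +0000 UTC")
	lastPull := parse("2025-04-30 00:49:17.060071918 +0000 UTC")
	running := parse("2025-04-30 00:49:17.809173518 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 1m8.809173518s
	fmt.Println("podStartSLOduration:", slo) // 1m5.767775783s, i.e. 65.767775783s
}
```

The coredns entry that follows shows the degenerate case: its pull timestamps are the zero value ("0001-01-01 00:00:00"), so the pull window is empty and the SLO and E2E durations coincide at 78.803423223s.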
Apr 30 00:49:18.803733 kubelet[2695]: I0430 00:49:18.803448 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nmff2" podStartSLOduration=78.803423223 podStartE2EDuration="1m18.803423223s" podCreationTimestamp="2025-04-30 00:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:49:18.801333442 +0000 UTC m=+83.511972474" watchObservedRunningTime="2025-04-30 00:49:18.803423223 +0000 UTC m=+83.514062295" Apr 30 00:49:19.654974 systemd-networkd[1379]: cali4ec5c9ddc85: Gained IPv6LL Apr 30 00:49:55.439388 containerd[1475]: time="2025-04-30T00:49:55.439319827Z" level=info msg="StopPodSandbox for \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\"" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.482 [WARNING][4932] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d84adf8b-78ca-4caa-b7fa-141d6645016d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5", Pod:"coredns-668d6bf9bc-nmff2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ec5c9ddc85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.483 [INFO][4932] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.483 [INFO][4932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" iface="eth0" netns="" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.483 [INFO][4932] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.483 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.505 [INFO][4939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.505 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.506 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.517 [WARNING][4939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.517 [INFO][4939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.519 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:55.522556 containerd[1475]: 2025-04-30 00:49:55.521 [INFO][4932] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.522556 containerd[1475]: time="2025-04-30T00:49:55.522525804Z" level=info msg="TearDown network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\" successfully" Apr 30 00:49:55.522556 containerd[1475]: time="2025-04-30T00:49:55.522550523Z" level=info msg="StopPodSandbox for \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\" returns successfully" Apr 30 00:49:55.523976 containerd[1475]: time="2025-04-30T00:49:55.523924495Z" level=info msg="RemovePodSandbox for \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\"" Apr 30 00:49:55.526344 containerd[1475]: time="2025-04-30T00:49:55.526285220Z" level=info msg="Forcibly stopping sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\"" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.578 [WARNING][4957] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d84adf8b-78ca-4caa-b7fa-141d6645016d", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"ed1d39e2532dff034c67bcdb1ef3f1747c643c781e78ce9d03ffa83fa89aeab5", Pod:"coredns-668d6bf9bc-nmff2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ec5c9ddc85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.578 [INFO][4957] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.578 [INFO][4957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" iface="eth0" netns="" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.578 [INFO][4957] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.578 [INFO][4957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.603 [INFO][4964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.603 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.603 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.618 [WARNING][4964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.618 [INFO][4964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" HandleID="k8s-pod-network.659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--nmff2-eth0" Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.621 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:55.625536 containerd[1475]: 2025-04-30 00:49:55.623 [INFO][4957] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1" Apr 30 00:49:55.626021 containerd[1475]: time="2025-04-30T00:49:55.625609250Z" level=info msg="TearDown network for sandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\" successfully" Apr 30 00:49:55.629752 containerd[1475]: time="2025-04-30T00:49:55.629681531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:55.629921 containerd[1475]: time="2025-04-30T00:49:55.629791925Z" level=info msg="RemovePodSandbox \"659b36b6f0716aed06ab0a9c42ea55540eebf4a635c5e77aba00fe7800868dd1\" returns successfully" Apr 30 00:49:55.630638 containerd[1475]: time="2025-04-30T00:49:55.630600406Z" level=info msg="StopPodSandbox for \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\"" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.673 [WARNING][4982] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d0658fc-ea0f-4318-925a-445b639eef93", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d", Pod:"coredns-668d6bf9bc-6dwx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali128ccf17a88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.673 [INFO][4982] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.673 [INFO][4982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" iface="eth0" netns="" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.673 [INFO][4982] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.673 [INFO][4982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.698 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.698 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.698 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.714 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.714 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.716 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:55.720581 containerd[1475]: 2025-04-30 00:49:55.718 [INFO][4982] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.721717 containerd[1475]: time="2025-04-30T00:49:55.721665679Z" level=info msg="TearDown network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\" successfully" Apr 30 00:49:55.721994 containerd[1475]: time="2025-04-30T00:49:55.721826871Z" level=info msg="StopPodSandbox for \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\" returns successfully" Apr 30 00:49:55.722701 containerd[1475]: time="2025-04-30T00:49:55.722609153Z" level=info msg="RemovePodSandbox for \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\"" Apr 30 00:49:55.723536 containerd[1475]: time="2025-04-30T00:49:55.722667350Z" level=info msg="Forcibly stopping sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\"" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.764 [WARNING][5007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d0658fc-ea0f-4318-925a-445b639eef93", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"bd26b1ac304aa5c5e2ff0d2d4af21dbd5a5cd1c88b47120da69ff5eabe921c3d", Pod:"coredns-668d6bf9bc-6dwx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali128ccf17a88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.764 [INFO][5007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.764 [INFO][5007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" iface="eth0" netns="" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.764 [INFO][5007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.764 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.787 [INFO][5014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.787 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.787 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.797 [WARNING][5014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.797 [INFO][5014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" HandleID="k8s-pod-network.f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Workload="ci--4081--3--3--7--874bc1dee9-k8s-coredns--668d6bf9bc--6dwx2-eth0" Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.800 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:55.804426 containerd[1475]: 2025-04-30 00:49:55.802 [INFO][5007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0" Apr 30 00:49:55.805814 containerd[1475]: time="2025-04-30T00:49:55.804486074Z" level=info msg="TearDown network for sandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\" successfully" Apr 30 00:49:55.808912 containerd[1475]: time="2025-04-30T00:49:55.808870300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:55.808912 containerd[1475]: time="2025-04-30T00:49:55.808933337Z" level=info msg="RemovePodSandbox \"f49138ee3bd719a8ec8055ac8e22662a88d58d215c3ead2e78deb39fa1070aa0\" returns successfully" Apr 30 00:49:55.809526 containerd[1475]: time="2025-04-30T00:49:55.809279880Z" level=info msg="StopPodSandbox for \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\"" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.866 [WARNING][5032] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"42091737-1585-49ef-b158-7687df3ab4ee", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4", Pod:"calico-apiserver-c5755df56-cw4q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali156a8db1f68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.866 [INFO][5032] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.866 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" iface="eth0" netns="" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.866 [INFO][5032] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.866 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.887 [INFO][5039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.887 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.887 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.904 [WARNING][5039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.904 [INFO][5039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.908 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:55.913703 containerd[1475]: 2025-04-30 00:49:55.912 [INFO][5032] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:55.914127 containerd[1475]: time="2025-04-30T00:49:55.913779897Z" level=info msg="TearDown network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\" successfully" Apr 30 00:49:55.914127 containerd[1475]: time="2025-04-30T00:49:55.913805576Z" level=info msg="StopPodSandbox for \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\" returns successfully" Apr 30 00:49:55.914372 containerd[1475]: time="2025-04-30T00:49:55.914340790Z" level=info msg="RemovePodSandbox for \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\"" Apr 30 00:49:55.914409 containerd[1475]: time="2025-04-30T00:49:55.914386187Z" level=info msg="Forcibly stopping sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\"" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.963 [WARNING][5057] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"42091737-1585-49ef-b158-7687df3ab4ee", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4", Pod:"calico-apiserver-c5755df56-cw4q4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali156a8db1f68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.964 [INFO][5057] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.964 [INFO][5057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" iface="eth0" netns="" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.964 [INFO][5057] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.964 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.985 [INFO][5064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.985 [INFO][5064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:55.985 [INFO][5064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:56.001 [WARNING][5064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:56.001 [INFO][5064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" HandleID="k8s-pod-network.00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--cw4q4-eth0" Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:56.006 [INFO][5064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:56.010054 containerd[1475]: 2025-04-30 00:49:56.007 [INFO][5057] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a" Apr 30 00:49:56.010054 containerd[1475]: time="2025-04-30T00:49:56.009983412Z" level=info msg="TearDown network for sandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\" successfully" Apr 30 00:49:56.018818 containerd[1475]: time="2025-04-30T00:49:56.018664080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:56.018945 containerd[1475]: time="2025-04-30T00:49:56.018844231Z" level=info msg="RemovePodSandbox \"00197901078a89ec933ad27f8e090fc3c4ad5a0a6e4fa18573a7610cecaf349a\" returns successfully" Apr 30 00:49:56.019720 containerd[1475]: time="2025-04-30T00:49:56.019619674Z" level=info msg="StopPodSandbox for \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\"" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.067 [WARNING][5082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7", Pod:"csi-node-driver-6xsdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67038f3938e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.067 [INFO][5082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.067 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" iface="eth0" netns="" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.067 [INFO][5082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.067 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.087 [INFO][5089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.088 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.088 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.099 [WARNING][5089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.099 [INFO][5089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.102 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:56.105445 containerd[1475]: 2025-04-30 00:49:56.103 [INFO][5082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.105445 containerd[1475]: time="2025-04-30T00:49:56.105372004Z" level=info msg="TearDown network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\" successfully" Apr 30 00:49:56.105445 containerd[1475]: time="2025-04-30T00:49:56.105408763Z" level=info msg="StopPodSandbox for \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\" returns successfully" Apr 30 00:49:56.106975 containerd[1475]: time="2025-04-30T00:49:56.106531789Z" level=info msg="RemovePodSandbox for \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\"" Apr 30 00:49:56.106975 containerd[1475]: time="2025-04-30T00:49:56.106592586Z" level=info msg="Forcibly stopping sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\"" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.149 [WARNING][5107] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8fd7f6d-a4f8-4bdf-9227-3046a8ddecc2", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7", Pod:"csi-node-driver-6xsdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67038f3938e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.149 [INFO][5107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.149 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" iface="eth0" netns="" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.149 [INFO][5107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.149 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.171 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.172 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.172 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.183 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.183 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" HandleID="k8s-pod-network.b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Workload="ci--4081--3--3--7--874bc1dee9-k8s-csi--node--driver--6xsdl-eth0" Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.186 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:56.191402 containerd[1475]: 2025-04-30 00:49:56.189 [INFO][5107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5" Apr 30 00:49:56.192449 containerd[1475]: time="2025-04-30T00:49:56.191446879Z" level=info msg="TearDown network for sandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\" successfully" Apr 30 00:49:56.200819 containerd[1475]: time="2025-04-30T00:49:56.200720519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:56.200819 containerd[1475]: time="2025-04-30T00:49:56.200804155Z" level=info msg="RemovePodSandbox \"b8d91dedebc034bfc97e7e916440d2e666a5c5585be4dbea3c7d908c178904f5\" returns successfully" Apr 30 00:49:56.201599 containerd[1475]: time="2025-04-30T00:49:56.201411486Z" level=info msg="StopPodSandbox for \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\"" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.248 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0", GenerateName:"calico-kube-controllers-7665649944-", Namespace:"calico-system", SelfLink:"", UID:"5fde6995-50ea-4fad-b1cb-118dd741b7ae", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7665649944", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144", Pod:"calico-kube-controllers-7665649944-24vtf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b6733a1e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.248 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.249 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" iface="eth0" netns="" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.249 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.249 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.271 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.271 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.271 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.281 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.281 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.284 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:56.287608 containerd[1475]: 2025-04-30 00:49:56.285 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.287608 containerd[1475]: time="2025-04-30T00:49:56.287370486Z" level=info msg="TearDown network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\" successfully" Apr 30 00:49:56.287608 containerd[1475]: time="2025-04-30T00:49:56.287397965Z" level=info msg="StopPodSandbox for \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\" returns successfully" Apr 30 00:49:56.290434 containerd[1475]: time="2025-04-30T00:49:56.289840649Z" level=info msg="RemovePodSandbox for \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\"" Apr 30 00:49:56.290434 containerd[1475]: time="2025-04-30T00:49:56.289899646Z" level=info msg="Forcibly stopping sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\"" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.342 [WARNING][5159] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0", GenerateName:"calico-kube-controllers-7665649944-", Namespace:"calico-system", SelfLink:"", UID:"5fde6995-50ea-4fad-b1cb-118dd741b7ae", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7665649944", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"04b96417e26c0256572790d74b2973f6b805869942c804bd225279edd9d7d144", Pod:"calico-kube-controllers-7665649944-24vtf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4b6733a1e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.342 [INFO][5159] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.343 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" iface="eth0" netns="" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.343 [INFO][5159] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.343 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.363 [INFO][5167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.363 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.364 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.376 [WARNING][5167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.376 [INFO][5167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" HandleID="k8s-pod-network.ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--kube--controllers--7665649944--24vtf-eth0" Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.378 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:56.382337 containerd[1475]: 2025-04-30 00:49:56.380 [INFO][5159] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941" Apr 30 00:49:56.382337 containerd[1475]: time="2025-04-30T00:49:56.382081311Z" level=info msg="TearDown network for sandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\" successfully" Apr 30 00:49:56.389326 containerd[1475]: time="2025-04-30T00:49:56.389255130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:56.389447 containerd[1475]: time="2025-04-30T00:49:56.389366805Z" level=info msg="RemovePodSandbox \"ac0aba49ad7237aa7e90a70f5b69168bf0f4c823f24a158fe5a78699bc539941\" returns successfully" Apr 30 00:49:56.390271 containerd[1475]: time="2025-04-30T00:49:56.390147328Z" level=info msg="StopPodSandbox for \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\"" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.435 [WARNING][5185] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e063804-4633-4df5-bdc8-5123bd9ecf26", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d", Pod:"calico-apiserver-c5755df56-rk6l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali179e4c567a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.435 [INFO][5185] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.435 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" iface="eth0" netns="" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.435 [INFO][5185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.435 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.454 [INFO][5193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.455 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.455 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.467 [WARNING][5193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.467 [INFO][5193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.469 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:56.472765 containerd[1475]: 2025-04-30 00:49:56.471 [INFO][5185] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.473610 containerd[1475]: time="2025-04-30T00:49:56.472704449Z" level=info msg="TearDown network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\" successfully" Apr 30 00:49:56.473610 containerd[1475]: time="2025-04-30T00:49:56.473445414Z" level=info msg="StopPodSandbox for \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\" returns successfully" Apr 30 00:49:56.474394 containerd[1475]: time="2025-04-30T00:49:56.473990628Z" level=info msg="RemovePodSandbox for \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\"" Apr 30 00:49:56.474394 containerd[1475]: time="2025-04-30T00:49:56.474020667Z" level=info msg="Forcibly stopping sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\"" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.523 [WARNING][5211] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0", GenerateName:"calico-apiserver-c5755df56-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e063804-4633-4df5-bdc8-5123bd9ecf26", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5755df56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-7-874bc1dee9", ContainerID:"137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d", Pod:"calico-apiserver-c5755df56-rk6l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali179e4c567a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.523 [INFO][5211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.523 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" iface="eth0" netns="" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.523 [INFO][5211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.523 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.546 [INFO][5218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.546 [INFO][5218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.546 [INFO][5218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.556 [WARNING][5218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.556 [INFO][5218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" HandleID="k8s-pod-network.b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Workload="ci--4081--3--3--7--874bc1dee9-k8s-calico--apiserver--c5755df56--rk6l6-eth0" Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.558 [INFO][5218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:49:56.562416 containerd[1475]: 2025-04-30 00:49:56.560 [INFO][5211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642" Apr 30 00:49:56.562416 containerd[1475]: time="2025-04-30T00:49:56.562121845Z" level=info msg="TearDown network for sandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\" successfully" Apr 30 00:49:56.567297 containerd[1475]: time="2025-04-30T00:49:56.567097169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:49:56.567297 containerd[1475]: time="2025-04-30T00:49:56.567190445Z" level=info msg="RemovePodSandbox \"b18c174b28b0b6756eb2d0e670684e2f07a44032317fda92802128562d0ce642\" returns successfully" Apr 30 00:50:10.072052 containerd[1475]: time="2025-04-30T00:50:10.071989719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:50:10.081114 containerd[1475]: time="2025-04-30T00:50:10.080715934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 53.019024979s" Apr 30 00:50:10.081114 containerd[1475]: time="2025-04-30T00:50:10.080761973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:50:10.082698 containerd[1475]: time="2025-04-30T00:50:10.082643076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:50:10.087601 containerd[1475]: time="2025-04-30T00:50:10.086810149Z" level=info msg="CreateContainer within sandbox \"3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:50:10.095121 containerd[1475]: time="2025-04-30T00:50:10.094984221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:10.098839 containerd[1475]: time="2025-04-30T00:50:10.098787065Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 
00:50:10.099938 containerd[1475]: time="2025-04-30T00:50:10.099477764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:10.103019 containerd[1475]: time="2025-04-30T00:50:10.102815063Z" level=info msg="CreateContainer within sandbox \"3eaca9e8268c04b5450d2c91e73d10585af815053905809eadb1e7208819f7e4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1fd3023f4df8110df1f89b197a267f87543024af7fe21da8bdde10196d060ff2\"" Apr 30 00:50:10.104924 containerd[1475]: time="2025-04-30T00:50:10.104658247Z" level=info msg="StartContainer for \"1fd3023f4df8110df1f89b197a267f87543024af7fe21da8bdde10196d060ff2\"" Apr 30 00:50:10.149115 systemd[1]: Started cri-containerd-1fd3023f4df8110df1f89b197a267f87543024af7fe21da8bdde10196d060ff2.scope - libcontainer container 1fd3023f4df8110df1f89b197a267f87543024af7fe21da8bdde10196d060ff2. Apr 30 00:50:10.184965 containerd[1475]: time="2025-04-30T00:50:10.184914329Z" level=info msg="StartContainer for \"1fd3023f4df8110df1f89b197a267f87543024af7fe21da8bdde10196d060ff2\" returns successfully" Apr 30 00:50:10.977982 kubelet[2695]: I0430 00:50:10.977916 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5755df56-cw4q4" podStartSLOduration=67.895060851 podStartE2EDuration="2m2.977899718s" podCreationTimestamp="2025-04-30 00:48:08 +0000 UTC" firstStartedPulling="2025-04-30 00:49:14.999257305 +0000 UTC m=+79.709896337" lastFinishedPulling="2025-04-30 00:50:10.082096172 +0000 UTC m=+134.792735204" observedRunningTime="2025-04-30 00:50:10.974295308 +0000 UTC m=+135.684934340" watchObservedRunningTime="2025-04-30 00:50:10.977899718 +0000 UTC m=+135.688538750" Apr 30 00:50:19.612948 containerd[1475]: time="2025-04-30T00:50:19.612856076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:19.615290 containerd[1475]: time="2025-04-30T00:50:19.614711037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:50:19.616381 containerd[1475]: time="2025-04-30T00:50:19.616332083Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:19.621185 containerd[1475]: time="2025-04-30T00:50:19.621143220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:19.621899 containerd[1475]: time="2025-04-30T00:50:19.621854685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 9.539054575s" Apr 30 00:50:19.621899 containerd[1475]: time="2025-04-30T00:50:19.621899124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:50:19.624139 containerd[1475]: 
time="2025-04-30T00:50:19.624055119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:50:19.625348 containerd[1475]: time="2025-04-30T00:50:19.625304092Z" level=info msg="CreateContainer within sandbox \"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:50:19.653056 containerd[1475]: time="2025-04-30T00:50:19.652903266Z" level=info msg="CreateContainer within sandbox \"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"772d4db1a1954a3665b47888523bf26eb6c9a8fb7fe500c1be758b6114504537\"" Apr 30 00:50:19.655192 containerd[1475]: time="2025-04-30T00:50:19.654133000Z" level=info msg="StartContainer for \"772d4db1a1954a3665b47888523bf26eb6c9a8fb7fe500c1be758b6114504537\"" Apr 30 00:50:19.703736 systemd[1]: Started cri-containerd-772d4db1a1954a3665b47888523bf26eb6c9a8fb7fe500c1be758b6114504537.scope - libcontainer container 772d4db1a1954a3665b47888523bf26eb6c9a8fb7fe500c1be758b6114504537. Apr 30 00:50:19.736959 containerd[1475]: time="2025-04-30T00:50:19.736918203Z" level=info msg="StartContainer for \"772d4db1a1954a3665b47888523bf26eb6c9a8fb7fe500c1be758b6114504537\" returns successfully" Apr 30 00:50:20.048638 containerd[1475]: time="2025-04-30T00:50:20.047738328Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:20.048991 containerd[1475]: time="2025-04-30T00:50:20.048948704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 00:50:20.051510 containerd[1475]: time="2025-04-30T00:50:20.051466933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 427.331056ms" Apr 30 00:50:20.051795 containerd[1475]: time="2025-04-30T00:50:20.051513892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:50:20.053410 containerd[1475]: time="2025-04-30T00:50:20.052953342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:50:20.056467 containerd[1475]: time="2025-04-30T00:50:20.056425992Z" level=info msg="CreateContainer within sandbox \"137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:50:20.075634 containerd[1475]: time="2025-04-30T00:50:20.075539644Z" level=info msg="CreateContainer within sandbox \"137476559a1b141c1148d9fdf4f7efd083b19d679078f5e8af023a7f1f097e8d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b675a979571dade295bb480003b48b7245642107686ac5ff209a4a2934a8acb8\"" Apr 30 00:50:20.079609 containerd[1475]: time="2025-04-30T00:50:20.076427946Z" level=info msg="StartContainer for \"b675a979571dade295bb480003b48b7245642107686ac5ff209a4a2934a8acb8\"" Apr 30 00:50:20.113784 systemd[1]: Started cri-containerd-b675a979571dade295bb480003b48b7245642107686ac5ff209a4a2934a8acb8.scope - libcontainer container 
b675a979571dade295bb480003b48b7245642107686ac5ff209a4a2934a8acb8. Apr 30 00:50:20.153270 containerd[1475]: time="2025-04-30T00:50:20.153138670Z" level=info msg="StartContainer for \"b675a979571dade295bb480003b48b7245642107686ac5ff209a4a2934a8acb8\" returns successfully" Apr 30 00:50:22.193291 kubelet[2695]: I0430 00:50:22.193187 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5755df56-rk6l6" podStartSLOduration=70.025271171 podStartE2EDuration="2m14.193166477s" podCreationTimestamp="2025-04-30 00:48:08 +0000 UTC" firstStartedPulling="2025-04-30 00:49:15.884735203 +0000 UTC m=+80.595374235" lastFinishedPulling="2025-04-30 00:50:20.052630509 +0000 UTC m=+144.763269541" observedRunningTime="2025-04-30 00:50:21.014447207 +0000 UTC m=+145.725086239" watchObservedRunningTime="2025-04-30 00:50:22.193166477 +0000 UTC m=+146.903805509" Apr 30 00:50:45.072623 containerd[1475]: time="2025-04-30T00:50:45.072541627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:45.075229 containerd[1475]: time="2025-04-30T00:50:45.073761706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:50:45.075229 containerd[1475]: time="2025-04-30T00:50:45.074579945Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:45.077611 containerd[1475]: time="2025-04-30T00:50:45.077515702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:50:45.079022 containerd[1475]: time="2025-04-30T00:50:45.078977661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 25.025982039s" Apr 30 00:50:45.079022 containerd[1475]: time="2025-04-30T00:50:45.079016701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:50:45.084386 containerd[1475]: time="2025-04-30T00:50:45.084199015Z" level=info msg="CreateContainer within sandbox \"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:50:45.101778 containerd[1475]: time="2025-04-30T00:50:45.101688717Z" level=info msg="CreateContainer within sandbox \"f48bc2e13a79cd41d9801b3d0efdf1cf658650d5c1c1100e0cfa87383d0b9ef7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"085154013491409263f764fe76038b78d765c32ecf70015a92d8dda7cf179022\"" Apr 30 00:50:45.102487 containerd[1475]: time="2025-04-30T00:50:45.102449917Z" level=info msg="StartContainer for \"085154013491409263f764fe76038b78d765c32ecf70015a92d8dda7cf179022\"" Apr 30 00:50:45.144891 systemd[1]: Started 
cri-containerd-085154013491409263f764fe76038b78d765c32ecf70015a92d8dda7cf179022.scope - libcontainer container 085154013491409263f764fe76038b78d765c32ecf70015a92d8dda7cf179022. Apr 30 00:50:45.179969 containerd[1475]: time="2025-04-30T00:50:45.179925717Z" level=info msg="StartContainer for \"085154013491409263f764fe76038b78d765c32ecf70015a92d8dda7cf179022\" returns successfully" Apr 30 00:50:45.632352 kubelet[2695]: I0430 00:50:45.632223 2695 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:50:45.636732 kubelet[2695]: I0430 00:50:45.636693 2695 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:50:46.096140 kubelet[2695]: I0430 00:50:46.095541 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6xsdl" podStartSLOduration=67.039139734 podStartE2EDuration="2m37.09551967s" podCreationTimestamp="2025-04-30 00:48:09 +0000 UTC" firstStartedPulling="2025-04-30 00:49:15.024427243 +0000 UTC m=+79.735066275" lastFinishedPulling="2025-04-30 00:50:45.080807179 +0000 UTC m=+169.791446211" observedRunningTime="2025-04-30 00:50:46.09464955 +0000 UTC m=+170.805288622" watchObservedRunningTime="2025-04-30 00:50:46.09551967 +0000 UTC m=+170.806158662" Apr 30 00:51:09.744787 systemd[1]: run-containerd-runc-k8s.io-91ec67a91fc72d92696f8f0f07d84d6cde1a69dab9b2097c8ba6ed438f4ea0db-runc.kgVFcp.mount: Deactivated successfully. Apr 30 00:51:47.806397 systemd[1]: run-containerd-runc-k8s.io-26e13113531f5bb9d379b9e2bbaa7b113b6ae2f7fb676ba3c1a3039956385752-runc.Dpdfr9.mount: Deactivated successfully. Apr 30 00:53:12.631033 systemd[1]: Started sshd@7-49.13.50.0:22-139.178.68.195:41538.service - OpenSSH per-connection server daemon (139.178.68.195:41538). Apr 30 00:53:13.640154 sshd[5815]: Accepted publickey for core from 139.178.68.195 port 41538 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:13.643271 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:13.649688 systemd-logind[1458]: New session 8 of user core. Apr 30 00:53:13.654771 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:53:14.422838 sshd[5815]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:14.428016 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:53:14.428323 systemd[1]: sshd@7-49.13.50.0:22-139.178.68.195:41538.service: Deactivated successfully. Apr 30 00:53:14.431055 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:53:14.434660 systemd-logind[1458]: Removed session 8. Apr 30 00:53:19.600042 systemd[1]: Started sshd@8-49.13.50.0:22-139.178.68.195:58132.service - OpenSSH per-connection server daemon (139.178.68.195:58132). Apr 30 00:53:20.588429 sshd[5847]: Accepted publickey for core from 139.178.68.195 port 58132 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:20.591293 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:20.598626 systemd-logind[1458]: New session 9 of user core. Apr 30 00:53:20.602028 systemd[1]: Started session-9.scope - Session 9 of User core. 
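The PullImage / CreateContainer / StartContainer entries above are the CRI-driven lifecycle containerd records for each Calico image; the 53s, 9.5s and 25s pull durations reappear later in the kubelet's podStartE2EDuration accounting. As an illustrative aside, not taken from this system: a minimal Go sketch of an equivalent pull against the same containerd socket, assuming the standard github.com/containerd/containerd client and the k8s.io namespace that CRI-managed images live in.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same socket the kubelet's CRI runtime uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the log shows being fetched.
	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/apiserver:v3.29.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```

The kubelet drives the real pulls over the CRI gRPC API rather than this native client, which is why the corresponding ImageCreate events above carry the io.cri-containerd.image=managed label.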
Apr 30 00:53:21.354474 sshd[5847]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:21.359239 systemd[1]: sshd@8-49.13.50.0:22-139.178.68.195:58132.service: Deactivated successfully. Apr 30 00:53:21.362454 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:53:21.363481 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:53:21.364936 systemd-logind[1458]: Removed session 9. Apr 30 00:53:26.540031 systemd[1]: Started sshd@9-49.13.50.0:22-139.178.68.195:37404.service - OpenSSH per-connection server daemon (139.178.68.195:37404). Apr 30 00:53:27.534270 sshd[5861]: Accepted publickey for core from 139.178.68.195 port 37404 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:27.535462 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:27.543754 systemd-logind[1458]: New session 10 of user core. Apr 30 00:53:27.547779 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:53:28.298490 sshd[5861]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:28.303303 systemd[1]: sshd@9-49.13.50.0:22-139.178.68.195:37404.service: Deactivated successfully. Apr 30 00:53:28.306764 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:53:28.309424 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:53:28.310624 systemd-logind[1458]: Removed session 10. Apr 30 00:53:28.475911 systemd[1]: Started sshd@10-49.13.50.0:22-139.178.68.195:37418.service - OpenSSH per-connection server daemon (139.178.68.195:37418). Apr 30 00:53:29.458071 sshd[5879]: Accepted publickey for core from 139.178.68.195 port 37418 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:29.460178 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:29.466805 systemd-logind[1458]: New session 11 of user core. Apr 30 00:53:29.470748 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:53:30.259740 sshd[5879]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:30.264426 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:53:30.265780 systemd[1]: sshd@10-49.13.50.0:22-139.178.68.195:37418.service: Deactivated successfully. Apr 30 00:53:30.268194 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:53:30.269841 systemd-logind[1458]: Removed session 11. Apr 30 00:53:30.432957 systemd[1]: Started sshd@11-49.13.50.0:22-139.178.68.195:37428.service - OpenSSH per-connection server daemon (139.178.68.195:37428). Apr 30 00:53:31.410139 sshd[5890]: Accepted publickey for core from 139.178.68.195 port 37428 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:31.412403 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:31.417427 systemd-logind[1458]: New session 12 of user core. Apr 30 00:53:31.427052 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:53:32.168246 sshd[5890]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:32.171959 systemd[1]: sshd@11-49.13.50.0:22-139.178.68.195:37428.service: Deactivated successfully. Apr 30 00:53:32.174074 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:53:32.176482 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:53:32.177555 systemd-logind[1458]: Removed session 12. 
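Further up, the kubelet csi_plugin.go entries record the Tigera CSI driver registering at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock and being validated at version 1.0.0. That validation happens over the CSI Identity gRPC service; a hypothetical probe of such a socket (assuming the github.com/container-storage-interface/spec Go bindings; none of this code appears in the log) could look like:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Socket path taken from the kubelet log lines above.
	const sock = "unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, sock,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// GetPluginInfo is the Identity RPC the kubelet uses to validate
	// a freshly registered driver.
	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx,
		&csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("driver %s version %s\n", info.GetName(), info.GetVendorVersion())
}
```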
Apr 30 00:53:37.353933 systemd[1]: Started sshd@12-49.13.50.0:22-139.178.68.195:56960.service - OpenSSH per-connection server daemon (139.178.68.195:56960). Apr 30 00:53:38.336027 sshd[5905]: Accepted publickey for core from 139.178.68.195 port 56960 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:38.337462 sshd[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:38.344727 systemd-logind[1458]: New session 13 of user core. Apr 30 00:53:38.352812 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:53:39.093257 sshd[5905]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:39.097408 systemd[1]: sshd@12-49.13.50.0:22-139.178.68.195:56960.service: Deactivated successfully. Apr 30 00:53:39.100025 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:53:39.103007 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:53:39.105050 systemd-logind[1458]: Removed session 13. Apr 30 00:53:39.269078 systemd[1]: Started sshd@13-49.13.50.0:22-139.178.68.195:56974.service - OpenSSH per-connection server daemon (139.178.68.195:56974). Apr 30 00:53:40.236630 sshd[5918]: Accepted publickey for core from 139.178.68.195 port 56974 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:40.238733 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:40.243930 systemd-logind[1458]: New session 14 of user core. Apr 30 00:53:40.255861 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:53:41.100021 sshd[5918]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:41.105281 systemd[1]: sshd@13-49.13.50.0:22-139.178.68.195:56974.service: Deactivated successfully. Apr 30 00:53:41.108912 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:53:41.111695 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:53:41.113792 systemd-logind[1458]: Removed session 14. Apr 30 00:53:41.277022 systemd[1]: Started sshd@14-49.13.50.0:22-139.178.68.195:56990.service - OpenSSH per-connection server daemon (139.178.68.195:56990). Apr 30 00:53:42.257843 sshd[5952]: Accepted publickey for core from 139.178.68.195 port 56990 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:42.259997 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:42.266281 systemd-logind[1458]: New session 15 of user core. Apr 30 00:53:42.274020 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:53:43.982961 sshd[5952]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:43.988867 systemd[1]: sshd@14-49.13.50.0:22-139.178.68.195:56990.service: Deactivated successfully. Apr 30 00:53:43.992336 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:53:43.994033 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:53:43.995952 systemd-logind[1458]: Removed session 15. Apr 30 00:53:44.159972 systemd[1]: Started sshd@15-49.13.50.0:22-139.178.68.195:56998.service - OpenSSH per-connection server daemon (139.178.68.195:56998). 
Apr 30 00:53:45.144154 sshd[5970]: Accepted publickey for core from 139.178.68.195 port 56998 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:45.145281 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:45.152235 systemd-logind[1458]: New session 16 of user core. Apr 30 00:53:45.158774 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:53:46.033978 sshd[5970]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:46.038761 systemd[1]: sshd@15-49.13.50.0:22-139.178.68.195:56998.service: Deactivated successfully. Apr 30 00:53:46.041629 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:53:46.044226 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:53:46.045959 systemd-logind[1458]: Removed session 16. Apr 30 00:53:46.215580 systemd[1]: Started sshd@16-49.13.50.0:22-139.178.68.195:49340.service - OpenSSH per-connection server daemon (139.178.68.195:49340). Apr 30 00:53:47.195484 sshd[5980]: Accepted publickey for core from 139.178.68.195 port 49340 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:47.197551 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:47.202769 systemd-logind[1458]: New session 17 of user core. Apr 30 00:53:47.211784 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:53:47.959917 sshd[5980]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:47.965225 systemd[1]: sshd@16-49.13.50.0:22-139.178.68.195:49340.service: Deactivated successfully. Apr 30 00:53:47.968822 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:53:47.970861 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:53:47.972467 systemd-logind[1458]: Removed session 17. Apr 30 00:53:53.140914 systemd[1]: Started sshd@17-49.13.50.0:22-139.178.68.195:49350.service - OpenSSH per-connection server daemon (139.178.68.195:49350). Apr 30 00:53:54.139849 sshd[6018]: Accepted publickey for core from 139.178.68.195 port 49350 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:53:54.142516 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:54.147935 systemd-logind[1458]: New session 18 of user core. Apr 30 00:53:54.151762 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:53:54.894013 sshd[6018]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:54.899167 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:53:54.899808 systemd[1]: sshd@17-49.13.50.0:22-139.178.68.195:49350.service: Deactivated successfully. Apr 30 00:53:54.902693 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:53:54.904699 systemd-logind[1458]: Removed session 18. Apr 30 00:54:00.075965 systemd[1]: Started sshd@18-49.13.50.0:22-139.178.68.195:49040.service - OpenSSH per-connection server daemon (139.178.68.195:49040). Apr 30 00:54:00.803681 systemd[1]: run-containerd-runc-k8s.io-26e13113531f5bb9d379b9e2bbaa7b113b6ae2f7fb676ba3c1a3039956385752-runc.o6yOtK.mount: Deactivated successfully. 
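The sshd/systemd-logind entries in this stretch are a steady cadence of short sessions for user core from 139.178.68.195 (sessions 8 through 19, most lasting under a few seconds). A small, hypothetical Go filter for exactly this journal format, pairing each "New session" line with its "Removed session" line to print durations; it assumes one entry per line as journalctl emits them:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches journal lines such as:
//   Apr 30 00:53:13.649688 systemd-logind[1458]: New session 8 of user core.
//   Apr 30 00:53:14.434660 systemd-logind[1458]: Removed session 8.
var re = regexp.MustCompile(
	`^(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) .*?(New|Removed) session (\d+)`)

func main() {
	const layout = "Jan 2 15:04:05.000000" // journal timestamps carry no year
	opened := map[string]time.Time{}

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, err := time.Parse(layout, m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "New":
			opened[m[3]] = ts
		case "Removed":
			if start, ok := opened[m[3]]; ok {
				fmt.Printf("session %s lasted %s\n", m[3], ts.Sub(start))
				delete(opened, m[3])
			}
		}
	}
}
```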
Apr 30 00:54:01.058215 sshd[6044]: Accepted publickey for core from 139.178.68.195 port 49040 ssh2: RSA SHA256:ACLXUt+7uFWNZVvklpgswHu5AM5+eT4ezI3y1kPpVUY Apr 30 00:54:01.061548 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:01.068308 systemd-logind[1458]: New session 19 of user core. Apr 30 00:54:01.074809 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:54:01.816688 sshd[6044]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:01.822716 systemd[1]: sshd@18-49.13.50.0:22-139.178.68.195:49040.service: Deactivated successfully. Apr 30 00:54:01.826505 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:54:01.830160 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:54:01.831856 systemd-logind[1458]: Removed session 19. Apr 30 00:54:18.020897 systemd[1]: cri-containerd-48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e.scope: Deactivated successfully. Apr 30 00:54:18.021345 systemd[1]: cri-containerd-48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e.scope: Consumed 7.173s CPU time. Apr 30 00:54:18.045382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e-rootfs.mount: Deactivated successfully. Apr 30 00:54:18.046936 containerd[1475]: time="2025-04-30T00:54:18.046868210Z" level=info msg="shim disconnected" id=48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e namespace=k8s.io Apr 30 00:54:18.046936 containerd[1475]: time="2025-04-30T00:54:18.046932288Z" level=warning msg="cleaning up after shim disconnected" id=48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e namespace=k8s.io Apr 30 00:54:18.046936 containerd[1475]: time="2025-04-30T00:54:18.046940808Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:18.474964 kubelet[2695]: E0430 00:54:18.474320 2695 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57336->10.0.0.2:2379: read: connection timed out" Apr 30 00:54:18.614849 systemd[1]: cri-containerd-8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091.scope: Deactivated successfully. Apr 30 00:54:18.616227 systemd[1]: cri-containerd-8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091.scope: Consumed 6.090s CPU time, 18.1M memory peak, 0B memory swap peak. Apr 30 00:54:18.648120 containerd[1475]: time="2025-04-30T00:54:18.647864339Z" level=info msg="shim disconnected" id=8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091 namespace=k8s.io Apr 30 00:54:18.648120 containerd[1475]: time="2025-04-30T00:54:18.647931497Z" level=warning msg="cleaning up after shim disconnected" id=8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091 namespace=k8s.io Apr 30 00:54:18.648120 containerd[1475]: time="2025-04-30T00:54:18.647940137Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:18.649607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091-rootfs.mount: Deactivated successfully. 
Apr 30 00:54:18.660597 containerd[1475]: time="2025-04-30T00:54:18.660421561Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:54:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:54:18.673101 kubelet[2695]: I0430 00:54:18.672770 2695 scope.go:117] "RemoveContainer" containerID="48b0784b6754af34868382d702a1389fd6fe4b08f9756e428af81ee756f2599e" Apr 30 00:54:18.686881 containerd[1475]: time="2025-04-30T00:54:18.686712057Z" level=info msg="CreateContainer within sandbox \"ac081b967f36243cac3c8c25c1bc6aa7f4ff43cce7fdf07d04a9b6ba89b506cc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 30 00:54:18.698869 containerd[1475]: time="2025-04-30T00:54:18.698468337Z" level=info msg="CreateContainer within sandbox \"ac081b967f36243cac3c8c25c1bc6aa7f4ff43cce7fdf07d04a9b6ba89b506cc\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a27db282e87ffbb95e0669367a4e4edf5b65c1da56d865b787f50f95cf40a62a\"" Apr 30 00:54:18.699031 containerd[1475]: time="2025-04-30T00:54:18.698985165Z" level=info msg="StartContainer for \"a27db282e87ffbb95e0669367a4e4edf5b65c1da56d865b787f50f95cf40a62a\"" Apr 30 00:54:18.725767 systemd[1]: Started cri-containerd-a27db282e87ffbb95e0669367a4e4edf5b65c1da56d865b787f50f95cf40a62a.scope - libcontainer container a27db282e87ffbb95e0669367a4e4edf5b65c1da56d865b787f50f95cf40a62a. Apr 30 00:54:18.754364 containerd[1475]: time="2025-04-30T00:54:18.754195174Z" level=info msg="StartContainer for \"a27db282e87ffbb95e0669367a4e4edf5b65c1da56d865b787f50f95cf40a62a\" returns successfully" Apr 30 00:54:19.676350 kubelet[2695]: I0430 00:54:19.675824 2695 scope.go:117] "RemoveContainer" containerID="8c5035cd70163e5d709ef17ca12587cd3eaf1df9c4951e30cb6bfc85a47ef091" Apr 30 00:54:19.680222 containerd[1475]: time="2025-04-30T00:54:19.680034121Z" level=info msg="CreateContainer within sandbox \"5c72a28561d6f291c550e23e39fb2a8204b1c9b0cac8696dc2b9fc868ab5c918\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 00:54:19.695280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715727014.mount: Deactivated successfully. Apr 30 00:54:19.701923 containerd[1475]: time="2025-04-30T00:54:19.701866932Z" level=info msg="CreateContainer within sandbox \"5c72a28561d6f291c550e23e39fb2a8204b1c9b0cac8696dc2b9fc868ab5c918\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bb075735e8c4cfe980a8d62da4474dd0671a26d9e6b185d2d0c14c4abfb55308\"" Apr 30 00:54:19.702744 containerd[1475]: time="2025-04-30T00:54:19.702666674Z" level=info msg="StartContainer for \"bb075735e8c4cfe980a8d62da4474dd0671a26d9e6b185d2d0c14c4abfb55308\"" Apr 30 00:54:19.742937 systemd[1]: Started cri-containerd-bb075735e8c4cfe980a8d62da4474dd0671a26d9e6b185d2d0c14c4abfb55308.scope - libcontainer container bb075735e8c4cfe980a8d62da4474dd0671a26d9e6b185d2d0c14c4abfb55308. Apr 30 00:54:19.781992 containerd[1475]: time="2025-04-30T00:54:19.781951305Z" level=info msg="StartContainer for \"bb075735e8c4cfe980a8d62da4474dd0671a26d9e6b185d2d0c14c4abfb55308\" returns successfully" Apr 30 00:54:19.803549 systemd[1]: run-containerd-runc-k8s.io-bb075735e8c4cfe980a8d62da4474dd0671a26d9e6b185d2d0c14c4abfb55308-runc.dDtSXe.mount: Deactivated successfully. 
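The sequence above — a cri-containerd-….scope deactivated, "shim disconnected", rootfs.mount cleanup, then the kubelet's RemoveContainer followed by a CreateContainer with Attempt:1 — is how an exited container (here tigera-operator and kube-controller-manager) gets replaced. A sketch of watching for the underlying exits via containerd's event stream; this assumes the Go client's Subscribe API and the TaskExit event type, and is illustrative only, since the kubelet consumes these through the CRI instead:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	apievents "github.com/containerd/containerd/api/events"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/typeurl"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Filter to task-exit events, the condition that surfaces above as
	// "shim disconnected" followed by scope deactivation.
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			ev, err := typeurl.UnmarshalAny(env.Event)
			if err != nil {
				continue
			}
			if exit, ok := ev.(*apievents.TaskExit); ok {
				log.Printf("container %s exited with status %d",
					exit.ContainerID, exit.ExitStatus)
			}
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```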
Apr 30 00:54:21.737468 kubelet[2695]: E0430 00:54:21.737135 2695 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57142->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-7-874bc1dee9.183af2834d702116 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-7-874bc1dee9,UID:34ebbf0bd7e6f022cc3122d00c4a0a06,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-7-874bc1dee9,},FirstTimestamp:2025-04-30 00:54:11.313320214 +0000 UTC m=+376.023959286,LastTimestamp:2025-04-30 00:54:11.313320214 +0000 UTC m=+376.023959286,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-7-874bc1dee9,}"
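This closing entry and the earlier "Failed to update lease" error share a cause: kubelet reads against etcd at 10.0.0.2:2379 are timing out, so both the node Lease renewal and the POST of the Unhealthy (readiness probe 500) event for kube-apiserver-ci-4081-3-3-7-874bc1dee9 fail. A hypothetical client-go check of the Lease a kubelet renews, using the node name from the log; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; any admin credential works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Each kubelet renews a Lease in kube-node-lease; a stale renewTime
	// is what the failed updates above would leave behind.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "ci-4081-3-3-7-874bc1dee9", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("last renew:", lease.Spec.RenewTime)
}
```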