Jan 29 11:58:58.923751 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 29 11:58:58.923801 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 29 11:58:58.923816 kernel: KASLR enabled Jan 29 11:58:58.923822 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 29 11:58:58.923829 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Jan 29 11:58:58.923834 kernel: random: crng init done Jan 29 11:58:58.923841 kernel: ACPI: Early table checksum verification disabled Jan 29 11:58:58.923847 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 29 11:58:58.923854 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 29 11:58:58.923876 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923883 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923889 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923896 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923902 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923909 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923918 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923925 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923931 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:58:58.923938 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 29 11:58:58.923944 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 29 11:58:58.923951 kernel: NUMA: Failed to initialise from firmware Jan 29 11:58:58.923957 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 29 11:58:58.923963 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Jan 29 11:58:58.923970 kernel: Zone ranges: Jan 29 11:58:58.923976 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 11:58:58.923984 kernel: DMA32 empty Jan 29 11:58:58.923991 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 29 11:58:58.923997 kernel: Movable zone start for each node Jan 29 11:58:58.924004 kernel: Early memory node ranges Jan 29 11:58:58.924010 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Jan 29 11:58:58.924017 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 29 11:58:58.924023 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 29 11:58:58.924030 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 29 11:58:58.924036 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 29 11:58:58.924042 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 29 11:58:58.924049 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jan 29 11:58:58.924055 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 29 11:58:58.924064 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 29 11:58:58.924070 kernel: psci: probing for conduit method from ACPI. 
Jan 29 11:58:58.924076 kernel: psci: PSCIv1.1 detected in firmware. Jan 29 11:58:58.924086 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 11:58:58.924093 kernel: psci: Trusted OS migration not required Jan 29 11:58:58.924100 kernel: psci: SMC Calling Convention v1.1 Jan 29 11:58:58.924109 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 29 11:58:58.924116 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 11:58:58.924123 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 11:58:58.924130 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 11:58:58.924137 kernel: Detected PIPT I-cache on CPU0 Jan 29 11:58:58.924144 kernel: CPU features: detected: GIC system register CPU interface Jan 29 11:58:58.924151 kernel: CPU features: detected: Hardware dirty bit management Jan 29 11:58:58.924157 kernel: CPU features: detected: Spectre-v4 Jan 29 11:58:58.924164 kernel: CPU features: detected: Spectre-BHB Jan 29 11:58:58.924171 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 29 11:58:58.924187 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 29 11:58:58.924196 kernel: CPU features: detected: ARM erratum 1418040 Jan 29 11:58:58.924203 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 29 11:58:58.924210 kernel: alternatives: applying boot alternatives Jan 29 11:58:58.924218 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 11:58:58.924226 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:58:58.924233 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:58:58.924240 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:58:58.924248 kernel: Fallback order for Node 0: 0 Jan 29 11:58:58.924255 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 29 11:58:58.924263 kernel: Policy zone: Normal Jan 29 11:58:58.924272 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:58:58.924279 kernel: software IO TLB: area num 2. Jan 29 11:58:58.924286 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 29 11:58:58.924294 kernel: Memory: 3882936K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213064K reserved, 0K cma-reserved) Jan 29 11:58:58.924301 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:58:58.924308 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:58:58.924326 kernel: rcu: RCU event tracing is enabled. Jan 29 11:58:58.924334 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:58:58.924341 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:58:58.924401 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:58:58.924415 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 11:58:58.924428 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:58:58.924436 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 11:58:58.924444 kernel: GICv3: 256 SPIs implemented Jan 29 11:58:58.924452 kernel: GICv3: 0 Extended SPIs implemented Jan 29 11:58:58.924460 kernel: Root IRQ handler: gic_handle_irq Jan 29 11:58:58.924468 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 29 11:58:58.924475 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 29 11:58:58.924482 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 29 11:58:58.924490 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 11:58:58.924497 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 29 11:58:58.924504 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 29 11:58:58.924511 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 29 11:58:58.924519 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:58:58.924526 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 11:58:58.924533 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 29 11:58:58.924540 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 29 11:58:58.924547 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 29 11:58:58.924554 kernel: Console: colour dummy device 80x25 Jan 29 11:58:58.924561 kernel: ACPI: Core revision 20230628 Jan 29 11:58:58.924569 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 29 11:58:58.924576 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:58:58.924583 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:58:58.924591 kernel: landlock: Up and running. Jan 29 11:58:58.924598 kernel: SELinux: Initializing. Jan 29 11:58:58.924605 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:58:58.924618 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:58:58.924627 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:58:58.924635 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:58:58.924642 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:58:58.924649 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:58:58.924656 kernel: Platform MSI: ITS@0x8080000 domain created Jan 29 11:58:58.924674 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 29 11:58:58.924682 kernel: Remapping and enabling EFI services. Jan 29 11:58:58.924689 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:58:58.924696 kernel: Detected PIPT I-cache on CPU1 Jan 29 11:58:58.924703 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 29 11:58:58.924710 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 29 11:58:58.924717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 11:58:58.924725 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 29 11:58:58.924732 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:58:58.924739 kernel: SMP: Total of 2 processors activated. 
Jan 29 11:58:58.924748 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 11:58:58.924755 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 29 11:58:58.924769 kernel: CPU features: detected: Common not Private translations Jan 29 11:58:58.924778 kernel: CPU features: detected: CRC32 instructions Jan 29 11:58:58.924787 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 29 11:58:58.924795 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 29 11:58:58.924802 kernel: CPU features: detected: LSE atomic instructions Jan 29 11:58:58.924810 kernel: CPU features: detected: Privileged Access Never Jan 29 11:58:58.924817 kernel: CPU features: detected: RAS Extension Support Jan 29 11:58:58.924827 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 29 11:58:58.924835 kernel: CPU: All CPU(s) started at EL1 Jan 29 11:58:58.924842 kernel: alternatives: applying system-wide alternatives Jan 29 11:58:58.924850 kernel: devtmpfs: initialized Jan 29 11:58:58.924857 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:58:58.924865 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:58:58.924872 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:58:58.924881 kernel: SMBIOS 3.0.0 present. Jan 29 11:58:58.924889 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 29 11:58:58.924897 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:58:58.924905 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 11:58:58.924914 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 11:58:58.924921 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 11:58:58.924929 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:58:58.924936 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Jan 29 11:58:58.924944 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:58:58.924952 kernel: cpuidle: using governor menu Jan 29 11:58:58.924960 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 29 11:58:58.924967 kernel: ASID allocator initialised with 32768 entries Jan 29 11:58:58.924975 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:58:58.924983 kernel: Serial: AMBA PL011 UART driver Jan 29 11:58:58.924990 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 29 11:58:58.925006 kernel: Modules: 0 pages in range for non-PLT usage Jan 29 11:58:58.925015 kernel: Modules: 509040 pages in range for PLT usage Jan 29 11:58:58.925023 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:58:58.925034 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:58:58.925043 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 11:58:58.925051 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 11:58:58.925059 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:58:58.925069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:58:58.925077 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 11:58:58.925086 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 11:58:58.925094 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:58:58.925103 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:58:58.925115 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:58:58.925123 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:58:58.925130 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:58:58.925138 kernel: ACPI: Interpreter enabled Jan 29 11:58:58.925145 kernel: ACPI: Using GIC for interrupt routing Jan 29 11:58:58.925153 kernel: ACPI: MCFG table detected, 1 entries Jan 29 11:58:58.925161 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 29 11:58:58.925168 kernel: printk: console [ttyAMA0] enabled Jan 29 11:58:58.925176 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:58:58.929479 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:58:58.929650 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 11:58:58.929774 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 11:58:58.929843 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 29 11:58:58.929920 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 29 11:58:58.929932 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 29 11:58:58.929940 kernel: PCI host bridge to bus 0000:00 Jan 29 11:58:58.930028 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 29 11:58:58.930093 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 11:58:58.930153 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 29 11:58:58.930219 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:58:58.930307 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 29 11:58:58.932576 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 29 11:58:58.932755 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 29 11:58:58.932841 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 11:58:58.932924 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.932994 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 29 11:58:58.933075 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.933146 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 29 11:58:58.933227 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.933308 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 29 11:58:58.933419 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.933498 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 29 11:58:58.933587 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.933688 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 29 11:58:58.933768 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.933849 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 29 11:58:58.933931 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.934007 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 29 11:58:58.934083 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.934150 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 29 11:58:58.934231 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 29 11:58:58.934314 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 29 11:58:58.935944 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 29 11:58:58.936046 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Jan 29 11:58:58.936139 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 11:58:58.936212 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 29 11:58:58.936286 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 11:58:58.937498 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 11:58:58.937630 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 29 11:58:58.937724 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 29 11:58:58.937807 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 29 11:58:58.937880 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 29 11:58:58.937949 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 29 11:58:58.938031 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 29 11:58:58.938116 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 29 11:58:58.938197 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 29 11:58:58.938333 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Jan 29 11:58:58.941065 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 29 11:58:58.941199 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 29 11:58:58.941275 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 29 11:58:58.942480 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 11:58:58.942602 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 11:58:58.942691 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 29 11:58:58.942771 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 29 11:58:58.942843 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 11:58:58.942925 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 29 11:58:58.942996 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 29 11:58:58.943088 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 29 11:58:58.943166 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 29 11:58:58.943233 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 29 11:58:58.943299 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 29 11:58:58.944521 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 29 11:58:58.944610 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 29 11:58:58.944728 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 29 11:58:58.944819 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 29 11:58:58.944891 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 29 11:58:58.944959 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 29 11:58:58.945034 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 29 11:58:58.945105 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 29 11:58:58.945174 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 29 11:58:58.945256 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 11:58:58.945334 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 29 11:58:58.945433 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 29 11:58:58.945507 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 11:58:58.945576 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 29 11:58:58.945646 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 29 11:58:58.945729 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 11:58:58.945881 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 29 11:58:58.945951 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 29 11:58:58.946034 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 11:58:58.946103 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 29 11:58:58.946175 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 29 11:58:58.946261 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Jan 29 11:58:58.946341 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 11:58:58.947499 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 29 11:58:58.947579 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 11:58:58.947712 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 29 11:58:58.947788 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 11:58:58.947864 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 29 11:58:58.947936 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 11:58:58.948009 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 29 11:58:58.948075 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 11:58:58.948145 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 29 11:58:58.948217 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 11:58:58.948289 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 29 11:58:58.952465 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 11:58:58.952598 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 29 11:58:58.952727 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 11:58:58.952810 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 29 11:58:58.952894 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 11:58:58.952977 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 29 11:58:58.953047 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 29 11:58:58.953120 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 29 11:58:58.953189 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 29 11:58:58.953267 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 29 11:58:58.953337 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 29 11:58:58.953848 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 29 11:58:58.954156 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 29 11:58:58.954244 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 29 11:58:58.954314 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 29 11:58:58.954977 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 29 11:58:58.955063 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 29 11:58:58.955140 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 29 11:58:58.955211 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 29 11:58:58.955288 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 29 11:58:58.955396 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 29 11:58:58.955475 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 29 11:58:58.955549 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 29 11:58:58.955621 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 29 11:58:58.955710 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Jan 29 11:58:58.955794 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 29 11:58:58.955874 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 29 11:58:58.955947 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 11:58:58.956024 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 29 11:58:58.956096 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 29 11:58:58.956172 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 29 11:58:58.956241 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 29 11:58:58.956310 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 11:58:58.957286 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 29 11:58:58.957467 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 29 11:58:58.957547 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 29 11:58:58.957622 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 29 11:58:58.957714 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 11:58:58.957797 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 11:58:58.957878 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 29 11:58:58.957961 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 29 11:58:58.958031 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 29 11:58:58.958100 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 29 11:58:58.958168 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 11:58:58.958248 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 11:58:58.958320 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 29 11:58:58.958445 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 29 11:58:58.958516 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 29 11:58:58.958586 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 11:58:58.958670 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 29 11:58:58.958753 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Jan 29 11:58:58.958827 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 29 11:58:58.958895 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 29 11:58:58.958985 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 29 11:58:58.959068 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 11:58:58.959146 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 29 11:58:58.959223 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 29 11:58:58.959295 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 29 11:58:58.959387 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 29 11:58:58.959460 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 29 11:58:58.959528 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 11:58:58.959606 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 29 11:58:58.959718 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 29 11:58:58.959798 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 29 11:58:58.959875 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 29 11:58:58.959944 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 29 11:58:58.960012 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 29 11:58:58.960079 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 11:58:58.960152 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 29 11:58:58.960222 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 29 11:58:58.960300 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 29 11:58:58.960384 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 11:58:58.960461 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 29 11:58:58.960532 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 29 11:58:58.960601 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 29 11:58:58.960681 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 11:58:58.960757 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 29 11:58:58.960822 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 11:58:58.960885 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 29 11:58:58.960966 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 29 11:58:58.961038 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 29 11:58:58.961123 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 11:58:58.961198 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 29 11:58:58.961264 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 29 11:58:58.961333 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 11:58:58.961522 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 29 11:58:58.961597 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 29 11:58:58.961701 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 11:58:58.961797 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 29 11:58:58.961867 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 29 11:58:58.961930 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 11:58:58.962001 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 29 11:58:58.962065 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 29 11:58:58.962126 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 11:58:58.962198 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 29 11:58:58.962261 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 29 11:58:58.962327 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 11:58:58.962533 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 29 11:58:58.962601 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 29 11:58:58.962686 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 11:58:58.962770 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 29 11:58:58.962835 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 29 11:58:58.962901 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 11:58:58.962972 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 29 11:58:58.963040 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 29 11:58:58.963104 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 11:58:58.963115 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 11:58:58.963123 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 11:58:58.963131 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 11:58:58.963139 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 11:58:58.963147 kernel: iommu: Default domain type: Translated Jan 29 11:58:58.963155 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 11:58:58.963165 kernel: efivars: Registered efivars operations Jan 29 11:58:58.963173 kernel: vgaarb: loaded Jan 29 11:58:58.963180 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 11:58:58.963189 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:58:58.963197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:58:58.963204 kernel: pnp: PnP ACPI init Jan 29 11:58:58.963285 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 29 11:58:58.963297 kernel: pnp: PnP ACPI: found 1 devices Jan 29 11:58:58.963308 kernel: NET: Registered PF_INET protocol family Jan 29 11:58:58.963316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:58:58.963324 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 11:58:58.963332 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:58:58.963340 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:58:58.963360 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 11:58:58.963370 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 11:58:58.963378 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:58:58.963399 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:58:58.963410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:58:58.963500 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 29 11:58:58.963513 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:58:58.963521 kernel: kvm [1]: HYP mode not available Jan 29 11:58:58.963529 kernel: Initialise system trusted keyrings Jan 29 11:58:58.963536 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 11:58:58.963544 kernel: Key type asymmetric registered Jan 29 11:58:58.963552 kernel: Asymmetric key parser 'x509' registered Jan 29 11:58:58.963560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 11:58:58.963571 kernel: io scheduler mq-deadline registered Jan 29 11:58:58.963580 kernel: io scheduler kyber registered Jan 29 11:58:58.963588 kernel: io scheduler bfq registered Jan 29 11:58:58.963597 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 11:58:58.963687 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 29 11:58:58.963765 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 29 11:58:58.963837 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.963913 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 29 11:58:58.963981 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Jan 29 11:58:58.964054 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.964151 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 11:58:58.964220 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 11:58:58.964290 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.965454 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 11:58:58.965638 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 11:58:58.965729 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.965818 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 11:58:58.965886 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 11:58:58.965952 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.966034 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 11:58:58.966102 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 11:58:58.966172 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.966247 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 11:58:58.966324 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 11:58:58.966489 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.966574 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 11:58:58.966644 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 11:58:58.966791 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.966804 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 11:58:58.966876 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 11:58:58.966946 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 11:58:58.967014 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:58:58.967029 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 11:58:58.967038 kernel: ACPI: button: Power Button [PWRB] Jan 29 11:58:58.967047 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 11:58:58.967136 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 11:58:58.967232 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 11:58:58.967244 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:58:58.967252 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 11:58:58.967324 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 11:58:58.967339 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 11:58:58.967359 kernel: thunder_xcv, ver 1.0 Jan 29 11:58:58.967369 kernel: thunder_bgx, ver 1.0 Jan 29 11:58:58.968407 kernel: nicpf, ver 1.0 Jan 29 11:58:58.968416 kernel: nicvf, ver 
1.0 Jan 29 11:58:58.968556 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 11:58:58.968628 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:58:58 UTC (1738151938) Jan 29 11:58:58.968639 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:58:58.968675 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 11:58:58.968686 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 11:58:58.968695 kernel: watchdog: Hard watchdog permanently disabled Jan 29 11:58:58.968702 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:58:58.968710 kernel: Segment Routing with IPv6 Jan 29 11:58:58.968718 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:58:58.968726 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:58:58.968733 kernel: Key type dns_resolver registered Jan 29 11:58:58.968741 kernel: registered taskstats version 1 Jan 29 11:58:58.968752 kernel: Loading compiled-in X.509 certificates Jan 29 11:58:58.968760 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 29 11:58:58.968768 kernel: Key type .fscrypt registered Jan 29 11:58:58.968776 kernel: Key type fscrypt-provisioning registered Jan 29 11:58:58.968784 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:58:58.968792 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:58:58.968800 kernel: ima: No architecture policies found Jan 29 11:58:58.968807 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 11:58:58.968815 kernel: clk: Disabling unused clocks Jan 29 11:58:58.968825 kernel: Freeing unused kernel memory: 39360K Jan 29 11:58:58.968832 kernel: Run /init as init process Jan 29 11:58:58.968860 kernel: with arguments: Jan 29 11:58:58.968868 kernel: /init Jan 29 11:58:58.968876 kernel: with environment: Jan 29 11:58:58.968883 kernel: HOME=/ Jan 29 11:58:58.968891 kernel: TERM=linux Jan 29 11:58:58.968899 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:58:58.968909 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:58:58.968922 systemd[1]: Detected virtualization kvm. Jan 29 11:58:58.968930 systemd[1]: Detected architecture arm64. Jan 29 11:58:58.968938 systemd[1]: Running in initrd. Jan 29 11:58:58.968946 systemd[1]: No hostname configured, using default hostname. Jan 29 11:58:58.968954 systemd[1]: Hostname set to . Jan 29 11:58:58.968962 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:58:58.968971 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:58:58.968981 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:58.968989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:58.968998 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:58:58.969007 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:58:58.969015 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 29 11:58:58.969024 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:58:58.969034 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:58:58.969044 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:58:58.969052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:58.969061 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:58.969069 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:58:58.969077 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:58:58.969092 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:58:58.969101 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:58:58.969110 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:58:58.969121 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:58:58.969130 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:58:58.969139 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:58:58.969147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:58.969155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:58.969164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:58.969172 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:58:58.969180 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:58:58.969189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:58:58.969199 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:58:58.969207 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:58:58.969215 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:58:58.969223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:58:58.969232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:58.969240 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:58:58.969249 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:58.969257 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:58:58.969294 systemd-journald[235]: Collecting audit messages is disabled. Jan 29 11:58:58.969319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:58:58.969328 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:58:58.969336 kernel: Bridge firewalling registered Jan 29 11:58:58.969344 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:58.969419 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:58:58.969428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:58.969437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 11:58:58.969446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:58:58.969457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:58:58.969465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:58.969475 systemd-journald[235]: Journal started Jan 29 11:58:58.969495 systemd-journald[235]: Runtime Journal (/run/log/journal/fbe777f422ec4e5a8712a935c9a5509b) is 8.0M, max 76.6M, 68.6M free. Jan 29 11:58:58.919604 systemd-modules-load[236]: Inserted module 'overlay' Jan 29 11:58:58.934835 systemd-modules-load[236]: Inserted module 'br_netfilter' Jan 29 11:58:58.975546 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:58:58.992584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:58:58.994723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:58.996556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:59.005771 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:58:59.008398 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:59.012605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:58:59.026158 dracut-cmdline[270]: dracut-dracut-053 Jan 29 11:58:59.032016 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 11:58:59.060528 systemd-resolved[273]: Positive Trust Anchors: Jan 29 11:58:59.061213 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:58:59.061250 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:58:59.071756 systemd-resolved[273]: Defaulting to hostname 'linux'. Jan 29 11:58:59.073617 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:58:59.074991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:59.144415 kernel: SCSI subsystem initialized Jan 29 11:58:59.149383 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:58:59.157392 kernel: iscsi: registered transport (tcp) Jan 29 11:58:59.171405 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:58:59.171463 kernel: QLogic iSCSI HBA Driver Jan 29 11:58:59.221209 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:58:59.228687 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 29 11:58:59.248634 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:58:59.248795 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:58:59.248868 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:58:59.300442 kernel: raid6: neonx8 gen() 15638 MB/s Jan 29 11:58:59.317399 kernel: raid6: neonx4 gen() 15509 MB/s Jan 29 11:58:59.334398 kernel: raid6: neonx2 gen() 13179 MB/s Jan 29 11:58:59.351417 kernel: raid6: neonx1 gen() 10441 MB/s Jan 29 11:58:59.368405 kernel: raid6: int64x8 gen() 6931 MB/s Jan 29 11:58:59.385519 kernel: raid6: int64x4 gen() 7322 MB/s Jan 29 11:58:59.402525 kernel: raid6: int64x2 gen() 6111 MB/s Jan 29 11:58:59.419422 kernel: raid6: int64x1 gen() 5030 MB/s Jan 29 11:58:59.419518 kernel: raid6: using algorithm neonx8 gen() 15638 MB/s Jan 29 11:58:59.437621 kernel: raid6: .... xor() 11871 MB/s, rmw enabled Jan 29 11:58:59.437705 kernel: raid6: using neon recovery algorithm Jan 29 11:58:59.441734 kernel: xor: measuring software checksum speed Jan 29 11:58:59.441810 kernel: 8regs : 19802 MB/sec Jan 29 11:58:59.441823 kernel: 32regs : 18288 MB/sec Jan 29 11:58:59.441833 kernel: arm64_neon : 26945 MB/sec Jan 29 11:58:59.443783 kernel: xor: using function: arm64_neon (26945 MB/sec) Jan 29 11:58:59.494434 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:58:59.512433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:58:59.519650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:59.548313 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jan 29 11:58:59.552846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:59.561651 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:58:59.578424 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jan 29 11:58:59.624039 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:58:59.631810 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:58:59.684142 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:59.691904 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:58:59.714398 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:58:59.717198 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:58:59.718165 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:59.718929 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:58:59.729649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:58:59.747575 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:58:59.817617 kernel: ACPI: bus type USB registered Jan 29 11:58:59.817705 kernel: usbcore: registered new interface driver usbfs Jan 29 11:58:59.817723 kernel: usbcore: registered new interface driver hub Jan 29 11:58:59.821333 kernel: usbcore: registered new device driver usb Jan 29 11:58:59.821446 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:58:59.823760 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:58:59.823856 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 29 11:58:59.833340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:58:59.833506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:59.835377 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:58:59.836269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:59.836472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:59.838128 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:59.847058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:59.859491 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 11:58:59.867904 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 29 11:58:59.868019 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 11:58:59.868103 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 11:58:59.868186 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 29 11:58:59.868275 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 29 11:58:59.868380 kernel: hub 1-0:1.0: USB hub found Jan 29 11:58:59.868494 kernel: hub 1-0:1.0: 4 ports detected Jan 29 11:58:59.868579 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 11:58:59.868734 kernel: hub 2-0:1.0: USB hub found Jan 29 11:58:59.868837 kernel: hub 2-0:1.0: 4 ports detected Jan 29 11:58:59.873700 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:59.881587 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 29 11:58:59.885694 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 29 11:58:59.885831 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:58:59.885842 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:58:59.881616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:58:59.906756 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 29 11:58:59.922516 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 29 11:58:59.922702 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 29 11:58:59.922807 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 29 11:58:59.922894 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 11:58:59.923003 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:58:59.923024 kernel: GPT:17805311 != 80003071 Jan 29 11:58:59.923034 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:58:59.923044 kernel: GPT:17805311 != 80003071 Jan 29 11:58:59.923053 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 29 11:58:59.923063 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:58:59.923076 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 29 11:58:59.915463 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:59.964387 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (527) Jan 29 11:58:59.966673 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 29 11:58:59.973866 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (508) Jan 29 11:58:59.975416 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 29 11:58:59.990188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 11:58:59.998072 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 29 11:59:00.000542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 29 11:59:00.008628 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:59:00.018778 disk-uuid[573]: Primary Header is updated. Jan 29 11:59:00.018778 disk-uuid[573]: Secondary Entries is updated. Jan 29 11:59:00.018778 disk-uuid[573]: Secondary Header is updated. Jan 29 11:59:00.028401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:00.108495 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 11:59:00.351394 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 29 11:59:00.496405 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 29 11:59:00.496469 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 29 11:59:00.498480 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 29 11:59:00.551941 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 29 11:59:00.552679 kernel: usbcore: registered new interface driver usbhid Jan 29 11:59:00.552709 kernel: usbhid: USB HID core driver Jan 29 11:59:01.040473 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:59:01.040998 disk-uuid[574]: The operation has completed successfully. Jan 29 11:59:01.110412 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:59:01.111318 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:59:01.118678 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:59:01.123659 sh[588]: Success Jan 29 11:59:01.137402 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 11:59:01.202069 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:59:01.222584 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:59:01.224638 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
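verity-setup.service has just opened /dev/mapper/usr, and the kernel reports sha256 via the ARMv8 crypto extensions ("sha256-ce") as the hash used to check /usr blocks against a trusted root hash. The sketch below captures only the rough idea, assuming 4096-byte blocks and ignoring dm-verity's real on-disk format, salt, and multi-level hash tree.

    import hashlib

    BLOCK = 4096

    def leaf_hashes(image_path):
        # Hash every data block; dm-verity checks these lazily as blocks are read.
        with open(image_path, "rb") as f:
            while True:
                block = f.read(BLOCK)
                if not block:
                    break
                yield hashlib.sha256(block.ljust(BLOCK, b"\0")).digest()

    def toy_root_hash(image_path):
        # Real dm-verity builds a salted multi-level tree; this simply hashes the
        # concatenated leaf digests to get one value to compare against.
        h = hashlib.sha256()
        for digest in leaf_hashes(image_path):
            h.update(digest)
        return h.hexdigest()

    # toy_root_hash("/path/to/usr.img") would then be compared with the root hash
    # the initrd received for the USR partition (simplified).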
Jan 29 11:59:01.258409 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 29 11:59:01.258472 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:59:01.258486 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:59:01.259423 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:59:01.259470 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:59:01.267436 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:59:01.270383 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:59:01.272457 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:59:01.277683 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:59:01.281668 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:59:01.298746 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:59:01.298813 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:59:01.298825 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:59:01.303773 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:59:01.303866 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:59:01.319540 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:59:01.320992 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:59:01.327389 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:59:01.336693 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:59:01.444526 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:59:01.454630 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:59:01.460865 ignition[672]: Ignition 2.19.0 Jan 29 11:59:01.460879 ignition[672]: Stage: fetch-offline Jan 29 11:59:01.460918 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:01.460928 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:59:01.461083 ignition[672]: parsed url from cmdline: "" Jan 29 11:59:01.465513 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:59:01.461087 ignition[672]: no config URL provided Jan 29 11:59:01.461091 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:59:01.461098 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:59:01.461104 ignition[672]: failed to fetch config: resource requires networking Jan 29 11:59:01.463677 ignition[672]: Ignition finished successfully Jan 29 11:59:01.483759 systemd-networkd[776]: lo: Link UP Jan 29 11:59:01.483773 systemd-networkd[776]: lo: Gained carrier Jan 29 11:59:01.486086 systemd-networkd[776]: Enumeration completed Jan 29 11:59:01.486225 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:59:01.486986 systemd[1]: Reached target network.target - Network. Jan 29 11:59:01.488217 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 11:59:01.488221 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:59:01.489147 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:01.489150 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:59:01.489984 systemd-networkd[776]: eth0: Link UP Jan 29 11:59:01.489988 systemd-networkd[776]: eth0: Gained carrier Jan 29 11:59:01.489997 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:01.495661 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:59:01.496826 systemd-networkd[776]: eth1: Link UP Jan 29 11:59:01.496829 systemd-networkd[776]: eth1: Gained carrier Jan 29 11:59:01.496839 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:01.511876 ignition[779]: Ignition 2.19.0 Jan 29 11:59:01.511896 ignition[779]: Stage: fetch Jan 29 11:59:01.512107 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:01.512117 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:59:01.512216 ignition[779]: parsed url from cmdline: "" Jan 29 11:59:01.512220 ignition[779]: no config URL provided Jan 29 11:59:01.512224 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:59:01.512232 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:59:01.512255 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 11:59:01.513052 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 11:59:01.534498 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:59:01.569507 systemd-networkd[776]: eth0: DHCPv4 address 188.34.178.132/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 11:59:01.713216 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 11:59:01.719458 ignition[779]: GET result: OK Jan 29 11:59:01.719571 ignition[779]: parsing config with SHA512: 76984dbae59ca804955be5d8819f94edbdf39da99a013dc03bf2855e45f662d818dfd0e374db85a04c44eb7397e942ceef7de79bb3c6731f87abb283fab7724d Jan 29 11:59:01.725447 unknown[779]: fetched base config from "system" Jan 29 11:59:01.725942 ignition[779]: fetch: fetch complete Jan 29 11:59:01.725459 unknown[779]: fetched base config from "system" Jan 29 11:59:01.725947 ignition[779]: fetch: fetch passed Jan 29 11:59:01.725464 unknown[779]: fetched user config from "hetzner" Jan 29 11:59:01.726002 ignition[779]: Ignition finished successfully Jan 29 11:59:01.729589 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:59:01.744839 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
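Ignition's fetch stage failed on attempt #1 because it ran before either interface had an address ("connect: network is unreachable"), then succeeded on attempt #2 once DHCP completed, logging the SHA512 of the retrieved config. Below is a small illustrative sketch of that retry-then-hash flow against the same Hetzner metadata endpoint; the retry count and delay are arbitrary here, not Ignition's actual backoff policy.

    import hashlib
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(url=URL, attempts=5, delay=1.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as exc:
                # Attempt #1 in the log failed exactly here: no address yet,
                # so the connect() came back with "network is unreachable".
                print(f"GET error on attempt #{attempt}: {exc}")
                time.sleep(delay)
        raise RuntimeError("failed to fetch config")

    body = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())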
Jan 29 11:59:01.760386 ignition[787]: Ignition 2.19.0 Jan 29 11:59:01.760398 ignition[787]: Stage: kargs Jan 29 11:59:01.760574 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:01.760583 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:59:01.761913 ignition[787]: kargs: kargs passed Jan 29 11:59:01.761983 ignition[787]: Ignition finished successfully Jan 29 11:59:01.764472 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:59:01.768626 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:59:01.785019 ignition[793]: Ignition 2.19.0 Jan 29 11:59:01.785035 ignition[793]: Stage: disks Jan 29 11:59:01.785269 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:01.785278 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:59:01.788360 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:59:01.786578 ignition[793]: disks: disks passed Jan 29 11:59:01.789472 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:59:01.786656 ignition[793]: Ignition finished successfully Jan 29 11:59:01.790390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:59:01.791359 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:59:01.792684 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:59:01.793463 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:59:01.801634 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:59:01.819368 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:59:01.822667 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:59:01.827484 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:59:01.890412 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 29 11:59:01.891736 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:59:01.894056 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:59:01.911626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:59:01.935562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:59:01.938374 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (809) Jan 29 11:59:01.939601 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:59:01.942000 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:59:01.942035 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:59:01.942046 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:59:01.943779 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:59:01.945023 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:59:01.947713 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 11:59:01.950381 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:59:01.950429 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:59:01.958986 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:59:01.962140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:59:02.023375 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:59:02.025021 coreos-metadata[811]: Jan 29 11:59:02.024 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 11:59:02.028119 coreos-metadata[811]: Jan 29 11:59:02.028 INFO Fetch successful Jan 29 11:59:02.029401 coreos-metadata[811]: Jan 29 11:59:02.029 INFO wrote hostname ci-4081-3-0-b-488529c6ca to /sysroot/etc/hostname Jan 29 11:59:02.034423 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:59:02.041894 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:59:02.050457 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:59:02.056245 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:59:02.194494 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:59:02.202597 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:59:02.206761 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:59:02.217385 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:59:02.253793 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:59:02.258181 ignition[925]: INFO : Ignition 2.19.0 Jan 29 11:59:02.258181 ignition[925]: INFO : Stage: mount Jan 29 11:59:02.258181 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:02.258181 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:59:02.257999 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:59:02.263460 ignition[925]: INFO : mount: mount passed Jan 29 11:59:02.263460 ignition[925]: INFO : Ignition finished successfully Jan 29 11:59:02.261425 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:59:02.269515 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:59:02.293868 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:59:02.309529 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (938) Jan 29 11:59:02.310832 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 11:59:02.310881 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:59:02.310893 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:59:02.315435 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:59:02.315509 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:59:02.319243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
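flatcar-metadata-hostname.service fetched the hostname from the Hetzner metadata service and wrote it into the target root rather than the initramfs, which is why it lands in /sysroot/etc/hostname. A minimal sketch of that final write, with the HTTP fetch itself omitted since it follows the same pattern as the userdata fetch above:

    from pathlib import Path

    def install_hostname(hostname, sysroot="/sysroot"):
        # Mirrors "wrote hostname ci-4081-3-0-b-488529c6ca to /sysroot/etc/hostname":
        # the file goes under the target root so it survives switch-root.
        path = Path(sysroot) / "etc/hostname"
        path.write_text(hostname.strip() + "\n")

    # install_hostname("ci-4081-3-0-b-488529c6ca")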
Jan 29 11:59:02.344165 ignition[955]: INFO : Ignition 2.19.0 Jan 29 11:59:02.344165 ignition[955]: INFO : Stage: files Jan 29 11:59:02.345568 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:02.345568 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:59:02.347745 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:59:02.348973 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:59:02.348973 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:59:02.353523 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:59:02.355480 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:59:02.357199 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:59:02.355617 unknown[955]: wrote ssh authorized keys file for user: core Jan 29 11:59:02.360971 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:59:02.362178 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:59:02.362178 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 11:59:02.362178 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 11:59:02.576164 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:59:02.920427 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 11:59:02.920427 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:59:02.920427 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:59:02.920427 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:59:02.925817 ignition[955]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:59:02.925817 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 11:59:03.098560 systemd-networkd[776]: eth0: Gained IPv6LL Jan 29 11:59:03.482007 systemd-networkd[776]: eth1: Gained IPv6LL Jan 29 11:59:03.635800 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:59:03.970761 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:59:03.970761 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:59:03.972776 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:59:03.972776 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Jan 29 11:59:03.972776 ignition[955]: INFO : files: files passed Jan 29 11:59:03.972776 ignition[955]: INFO : Ignition finished successfully Jan 29 11:59:03.974665 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:59:03.980554 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:59:03.982549 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:59:04.003406 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:59:04.003504 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:59:04.010891 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:59:04.010891 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:59:04.013642 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:59:04.016827 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:59:04.019601 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:59:04.029586 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:59:04.064019 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:59:04.064197 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:59:04.066297 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:59:04.066913 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:59:04.068727 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:59:04.070467 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:59:04.100257 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:59:04.108684 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:59:04.121315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:59:04.122583 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:59:04.123239 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:59:04.124392 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:59:04.124523 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:59:04.125752 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:59:04.126791 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:59:04.127595 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:59:04.128510 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:59:04.129548 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:59:04.130637 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:59:04.131569 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:59:04.132592 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 29 11:59:04.133651 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:59:04.134577 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:59:04.135395 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:59:04.135526 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:59:04.136735 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:59:04.137800 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:59:04.138815 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:59:04.139304 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:59:04.140066 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:59:04.140200 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:59:04.141720 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:59:04.141855 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:59:04.143342 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:59:04.143457 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:59:04.144311 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:59:04.144432 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:59:04.160878 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:59:04.164718 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:59:04.165994 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:59:04.166179 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:59:04.168432 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:59:04.168615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:59:04.181939 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:59:04.182784 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:59:04.185977 ignition[1007]: INFO : Ignition 2.19.0 Jan 29 11:59:04.188594 ignition[1007]: INFO : Stage: umount Jan 29 11:59:04.188594 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:59:04.188594 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:59:04.188594 ignition[1007]: INFO : umount: umount passed Jan 29 11:59:04.188594 ignition[1007]: INFO : Ignition finished successfully Jan 29 11:59:04.193813 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:59:04.194478 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:59:04.194589 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:59:04.199179 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:59:04.199335 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:59:04.200040 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:59:04.200093 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:59:04.200962 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:59:04.201008 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 29 11:59:04.202062 systemd[1]: Stopped target network.target - Network. Jan 29 11:59:04.202891 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:59:04.202952 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:59:04.203854 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:59:04.204698 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:59:04.209431 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:59:04.210194 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:59:04.211561 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:59:04.213505 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:59:04.213590 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:59:04.214471 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:59:04.214518 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:59:04.215334 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:59:04.215411 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:59:04.216398 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:59:04.216452 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:59:04.217448 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:59:04.218500 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:59:04.219644 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:59:04.219775 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:59:04.221559 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:59:04.221690 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:59:04.224772 systemd-networkd[776]: eth0: DHCPv6 lease lost Jan 29 11:59:04.225709 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:59:04.225873 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:59:04.228960 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:59:04.229063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:59:04.229508 systemd-networkd[776]: eth1: DHCPv6 lease lost Jan 29 11:59:04.232436 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:59:04.232686 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:59:04.234555 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:59:04.234630 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:59:04.243541 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:59:04.244095 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:59:04.244174 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:59:04.246139 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:59:04.246202 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:59:04.246988 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:59:04.247039 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 29 11:59:04.248739 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:59:04.268185 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:59:04.268309 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:59:04.271252 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:59:04.271441 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:59:04.272789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:59:04.272835 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:59:04.273698 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:59:04.273756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:59:04.274648 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:59:04.274702 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:59:04.275455 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:59:04.275505 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:59:04.278493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:59:04.278556 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:59:04.284749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:59:04.285331 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:59:04.285420 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:59:04.287867 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:59:04.287932 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:59:04.288564 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:59:04.288606 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:59:04.290051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:59:04.290193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:04.297712 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:59:04.297829 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:59:04.299870 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:59:04.307894 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:59:04.319224 systemd[1]: Switching root. Jan 29 11:59:04.344513 systemd-journald[235]: Journal stopped Jan 29 11:59:05.589670 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). 
Jan 29 11:59:05.589756 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:59:05.589769 kernel: SELinux: policy capability open_perms=1 Jan 29 11:59:05.589780 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:59:05.589789 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:59:05.589803 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:59:05.589817 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:59:05.589827 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:59:05.589836 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:59:05.589847 kernel: audit: type=1403 audit(1738151944.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:59:05.589858 systemd[1]: Successfully loaded SELinux policy in 37.061ms. Jan 29 11:59:05.589882 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.450ms. Jan 29 11:59:05.589894 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:59:05.589905 systemd[1]: Detected virtualization kvm. Jan 29 11:59:05.589918 systemd[1]: Detected architecture arm64. Jan 29 11:59:05.589933 systemd[1]: Detected first boot. Jan 29 11:59:05.589944 systemd[1]: Hostname set to . Jan 29 11:59:05.589954 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:59:05.589965 zram_generator::config[1070]: No configuration found. Jan 29 11:59:05.589977 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:59:05.589992 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:59:05.590003 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 11:59:05.590016 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:59:05.590027 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:59:05.590037 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:59:05.590048 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:59:05.590058 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:59:05.590069 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:59:05.590079 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:59:05.590089 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:59:05.590100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:59:05.590113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:59:05.590127 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:59:05.590137 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:59:05.590148 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:59:05.590159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
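"Initializing machine ID from VM UUID" means systemd derived the machine ID from the hypervisor-provided DMI product UUID instead of generating a random one on this first boot. A rough sketch of that derivation, assuming the UUID is read from the usual sysfs path; the real logic lives inside systemd and handles more sources and edge cases.

    from pathlib import Path

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        uuid = Path(path).read_text().strip()
        # A machine ID is the UUID with dashes removed and lower-cased: 32 hex chars.
        return uuid.replace("-", "").lower()

    # print(machine_id_from_vm_uuid())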
Jan 29 11:59:05.590169 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:59:05.590180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:59:05.590191 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:59:05.590201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:59:05.590214 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:59:05.590225 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:59:05.590236 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:59:05.590246 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:59:05.590257 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:59:05.590268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:59:05.590279 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:59:05.590291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:59:05.590302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:59:05.590313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:59:05.590323 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:59:05.590334 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:59:05.590344 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:59:05.590381 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:59:05.590392 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:59:05.590403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:59:05.590418 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:59:05.590431 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:59:05.590441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:59:05.590452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:59:05.590462 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:59:05.590474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:59:05.590485 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:59:05.590496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:59:05.590509 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:59:05.590520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:59:05.590531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:59:05.590541 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 11:59:05.590553 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 29 11:59:05.590565 kernel: fuse: init (API version 7.39) Jan 29 11:59:05.590576 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:59:05.590587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:59:05.590598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:59:05.590617 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:59:05.590631 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:59:05.590642 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:59:05.590652 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:59:05.590667 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:59:05.590680 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:59:05.590692 kernel: ACPI: bus type drm_connector registered Jan 29 11:59:05.590701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:59:05.590712 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:59:05.590723 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:59:05.590735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:59:05.590783 systemd-journald[1155]: Collecting audit messages is disabled. Jan 29 11:59:05.590808 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:59:05.590820 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:59:05.590831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:59:05.590843 systemd-journald[1155]: Journal started Jan 29 11:59:05.590869 systemd-journald[1155]: Runtime Journal (/run/log/journal/fbe777f422ec4e5a8712a935c9a5509b) is 8.0M, max 76.6M, 68.6M free. Jan 29 11:59:05.593140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:59:05.593189 kernel: loop: module loaded Jan 29 11:59:05.595230 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:59:05.595927 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:59:05.596139 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:59:05.597046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:59:05.597194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:59:05.598212 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:59:05.598394 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:59:05.599260 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:59:05.599618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:59:05.603014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:59:05.605941 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:59:05.607311 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:59:05.622312 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:59:05.629539 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 29 11:59:05.632593 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:59:05.635512 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:59:05.639584 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:59:05.652533 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:59:05.654472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:59:05.666567 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:59:05.670046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:59:05.672449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:59:05.677913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:59:05.684539 systemd-journald[1155]: Time spent on flushing to /var/log/journal/fbe777f422ec4e5a8712a935c9a5509b is 63.090ms for 1113 entries. Jan 29 11:59:05.684539 systemd-journald[1155]: System Journal (/var/log/journal/fbe777f422ec4e5a8712a935c9a5509b) is 8.0M, max 584.8M, 576.8M free. Jan 29 11:59:05.768109 systemd-journald[1155]: Received client request to flush runtime journal. Jan 29 11:59:05.689022 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:59:05.694975 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:59:05.702977 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:59:05.703859 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:59:05.722881 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:59:05.730697 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:59:05.740763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:59:05.753497 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 29 11:59:05.753510 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 29 11:59:05.754157 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:59:05.761216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:59:05.774393 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:59:05.781583 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:59:05.821921 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:59:05.828706 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:59:05.855444 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Jan 29 11:59:05.855464 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Jan 29 11:59:05.862999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:59:06.369526 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 29 11:59:06.376630 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:59:06.414309 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Jan 29 11:59:06.444695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:59:06.456658 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:59:06.485722 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:59:06.537616 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 29 11:59:06.578716 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:59:06.627439 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1240) Jan 29 11:59:06.691959 systemd-networkd[1239]: lo: Link UP Jan 29 11:59:06.691974 systemd-networkd[1239]: lo: Gained carrier Jan 29 11:59:06.693747 systemd-networkd[1239]: Enumeration completed Jan 29 11:59:06.693979 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:59:06.696952 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:06.696964 systemd-networkd[1239]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:59:06.702638 systemd-networkd[1239]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:06.702649 systemd-networkd[1239]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:59:06.703788 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:59:06.704832 systemd-networkd[1239]: eth0: Link UP Jan 29 11:59:06.704846 systemd-networkd[1239]: eth0: Gained carrier Jan 29 11:59:06.704868 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:06.708858 systemd-networkd[1239]: eth1: Link UP Jan 29 11:59:06.708869 systemd-networkd[1239]: eth1: Gained carrier Jan 29 11:59:06.708890 systemd-networkd[1239]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:06.741826 systemd-networkd[1239]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:59:06.758423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 11:59:06.764466 systemd-networkd[1239]: eth0: DHCPv4 address 188.34.178.132/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 11:59:06.765937 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:59:06.782080 systemd-networkd[1239]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:06.787869 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:59:06.809093 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 29 11:59:06.809120 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. 
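Both NICs receive /32 addresses (10.0.0.3/32 and 188.34.178.132/32) whose gateways sit outside the assigned prefix, a layout typical of Hetzner cloud networking, so the gateway has to be reached through an explicit on-link host route that systemd-networkd installs from the DHCP lease. The quick check below, using only the standard library, shows the gateway really is outside the delegated prefix:

    import ipaddress

    addr = ipaddress.ip_interface("188.34.178.132/32")
    gateway = ipaddress.ip_address("172.31.1.1")

    # A /32 contains only the address itself, so the gateway can never be
    # on-link in the ordinary sense; it needs a dedicated host route.
    print(gateway in addr.network)        # False
    print(addr.network.num_addresses)     # 1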
Jan 29 11:59:06.809286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:59:06.815627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:59:06.831441 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 29 11:59:06.831536 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 11:59:06.831555 kernel: [drm] features: -context_init Jan 29 11:59:06.832816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:59:06.837938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:59:06.839030 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:59:06.839086 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:59:06.842646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:59:06.842857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:59:06.847774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:59:06.847979 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:59:06.851779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:59:06.859766 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:59:06.863040 kernel: [drm] number of scanouts: 1 Jan 29 11:59:06.863110 kernel: [drm] number of cap sets: 0 Jan 29 11:59:06.861495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:59:06.864171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:59:06.873416 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 29 11:59:06.885637 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 11:59:06.895477 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 11:59:06.905839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:06.918273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:59:06.918616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:06.926731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:59:06.989083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:59:07.047333 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:59:07.063789 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:59:07.080405 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:59:07.107756 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:59:07.108623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:59:07.118662 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:59:07.124100 lvm[1307]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jan 29 11:59:07.155365 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:59:07.156192 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:59:07.157200 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:59:07.157313 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:59:07.157958 systemd[1]: Reached target machines.target - Containers. Jan 29 11:59:07.160188 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:59:07.167617 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:59:07.175097 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:59:07.176078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:59:07.181663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:59:07.192729 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:59:07.197682 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:59:07.199440 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:59:07.224123 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:59:07.232414 kernel: loop0: detected capacity change from 0 to 114432 Jan 29 11:59:07.242720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:59:07.244324 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:59:07.260396 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:59:07.275573 kernel: loop1: detected capacity change from 0 to 194096 Jan 29 11:59:07.346010 kernel: loop2: detected capacity change from 0 to 8 Jan 29 11:59:07.368796 kernel: loop3: detected capacity change from 0 to 114328 Jan 29 11:59:07.405431 kernel: loop4: detected capacity change from 0 to 114432 Jan 29 11:59:07.422407 kernel: loop5: detected capacity change from 0 to 194096 Jan 29 11:59:07.459407 kernel: loop6: detected capacity change from 0 to 8 Jan 29 11:59:07.462411 kernel: loop7: detected capacity change from 0 to 114328 Jan 29 11:59:07.479006 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 29 11:59:07.479507 (sd-merge)[1328]: Merged extensions into '/usr'. Jan 29 11:59:07.486719 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:59:07.486901 systemd[1]: Reloading... Jan 29 11:59:07.583390 zram_generator::config[1359]: No configuration found. Jan 29 11:59:07.731147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:59:07.750426 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 29 11:59:07.769476 systemd-networkd[1239]: eth0: Gained IPv6LL Jan 29 11:59:07.793898 systemd[1]: Reloading finished in 306 ms. Jan 29 11:59:07.818905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:59:07.820645 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:59:07.822273 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:59:07.831820 systemd[1]: Starting ensure-sysext.service... Jan 29 11:59:07.838672 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:59:07.846121 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:59:07.846291 systemd[1]: Reloading... Jan 29 11:59:07.891093 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:59:07.891435 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:59:07.892126 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:59:07.892373 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Jan 29 11:59:07.892423 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Jan 29 11:59:07.898494 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:59:07.898514 systemd-tmpfiles[1403]: Skipping /boot Jan 29 11:59:07.909097 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:59:07.909119 systemd-tmpfiles[1403]: Skipping /boot Jan 29 11:59:07.925382 zram_generator::config[1433]: No configuration found. Jan 29 11:59:08.062795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:59:08.120203 systemd[1]: Reloading finished in 273 ms. Jan 29 11:59:08.139121 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:59:08.156762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:59:08.167738 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:59:08.171646 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:59:08.186987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:59:08.198479 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:59:08.205962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:59:08.215691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:59:08.226649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:59:08.246100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:59:08.246959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:59:08.249035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 11:59:08.249707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:59:08.263283 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:59:08.269537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:59:08.277871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:59:08.283792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:59:08.286724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:59:08.287983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:59:08.288168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:59:08.290802 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:59:08.294094 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:59:08.295709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:59:08.299184 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:59:08.313754 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:59:08.315127 systemd[1]: Finished ensure-sysext.service. Jan 29 11:59:08.318826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:59:08.319100 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:59:08.327614 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:59:08.328036 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:59:08.338102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:59:08.341681 augenrules[1518]: No rules Jan 29 11:59:08.350688 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:59:08.352297 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:59:08.355578 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:59:08.383501 systemd-resolved[1480]: Positive Trust Anchors: Jan 29 11:59:08.383870 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:59:08.383905 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:59:08.386434 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:59:08.387784 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 29 11:59:08.392328 systemd-resolved[1480]: Using system hostname 'ci-4081-3-0-b-488529c6ca'. Jan 29 11:59:08.395454 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:59:08.396199 systemd[1]: Reached target network.target - Network. Jan 29 11:59:08.397655 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:59:08.398374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:59:08.422246 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:59:08.424968 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:59:08.427764 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:59:08.429224 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:59:08.430836 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:59:08.431568 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:59:08.431627 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:59:08.432076 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:59:08.432842 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:59:08.433485 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:59:08.434116 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:59:08.435940 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:59:08.438311 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:59:08.440464 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:59:08.443641 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:59:08.444282 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:59:08.444878 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:59:08.445630 systemd[1]: System is tainted: cgroupsv1 Jan 29 11:59:08.445676 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:59:08.445699 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:59:08.449476 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:59:08.452691 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:59:08.458897 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:59:08.467555 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:59:08.473617 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:59:08.474408 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:59:08.481119 jq[1538]: false Jan 29 11:59:08.480538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:08.494556 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:59:08.512652 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 29 11:59:08.512994 dbus-daemon[1537]: [system] SELinux support is enabled Jan 29 11:59:08.519045 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:59:08.524852 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 29 11:59:08.538330 systemd-networkd[1239]: eth1: Gained IPv6LL Jan 29 11:59:08.542571 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:59:08.543475 extend-filesystems[1539]: Found loop4 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found loop5 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found loop6 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found loop7 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda1 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda2 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda3 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found usr Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda4 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda6 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda7 Jan 29 11:59:08.543475 extend-filesystems[1539]: Found sda9 Jan 29 11:59:08.543475 extend-filesystems[1539]: Checking size of /dev/sda9 Jan 29 11:59:08.214324 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 11:59:08.363105 systemd-journald[1155]: Time jumped backwards, rotating. Jan 29 11:59:08.363276 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1237) Jan 29 11:59:08.363442 extend-filesystems[1539]: Resized partition /dev/sda9 Jan 29 11:59:08.375630 coreos-metadata[1535]: Jan 29 11:59:08.135 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 11:59:08.375630 coreos-metadata[1535]: Jan 29 11:59:08.145 INFO Fetch successful Jan 29 11:59:08.375630 coreos-metadata[1535]: Jan 29 11:59:08.153 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 11:59:08.375630 coreos-metadata[1535]: Jan 29 11:59:08.155 INFO Fetch successful Jan 29 11:59:08.130099 systemd-timesyncd[1525]: Contacted time server 148.251.235.164:123 (0.flatcar.pool.ntp.org). Jan 29 11:59:08.376608 extend-filesystems[1567]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:59:08.130160 systemd-timesyncd[1525]: Initial clock synchronization to Wed 2025-01-29 11:59:08.129929 UTC. Jan 29 11:59:08.130574 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:59:08.130632 systemd-resolved[1480]: Clock change detected. Flushing caches. Jan 29 11:59:08.147146 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:59:08.168622 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:59:08.405596 update_engine[1578]: I20250129 11:59:08.255824 1578 main.cc:92] Flatcar Update Engine starting Jan 29 11:59:08.405596 update_engine[1578]: I20250129 11:59:08.268471 1578 update_check_scheduler.cc:74] Next update check in 4m36s Jan 29 11:59:08.182811 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:59:08.406328 jq[1580]: true Jan 29 11:59:08.196956 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:59:08.199852 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 29 11:59:08.228754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:59:08.229017 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:59:08.407423 jq[1588]: true Jan 29 11:59:08.237852 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:59:08.407663 tar[1587]: linux-arm64/helm Jan 29 11:59:08.239825 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:59:08.257960 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:59:08.263945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:59:08.264373 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:59:08.297621 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:59:08.301258 systemd-logind[1568]: New seat seat0. Jan 29 11:59:08.335381 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:59:08.341624 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:59:08.341646 systemd-logind[1568]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 11:59:08.375583 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:59:08.440496 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:59:08.440676 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:59:08.445050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:59:08.445473 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:59:08.448927 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:59:08.455546 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:59:08.470607 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 11:59:08.478443 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:59:08.479612 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:59:08.509760 extend-filesystems[1567]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 11:59:08.509760 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 11:59:08.509760 extend-filesystems[1567]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 11:59:08.526272 extend-filesystems[1539]: Resized filesystem in /dev/sda9 Jan 29 11:59:08.526272 extend-filesystems[1539]: Found sr0 Jan 29 11:59:08.510192 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:59:08.510577 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:59:08.547625 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:59:08.559616 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 29 11:59:08.572721 systemd[1]: Starting sshkeys.service... Jan 29 11:59:08.610913 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:59:08.619766 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:59:08.712683 coreos-metadata[1642]: Jan 29 11:59:08.712 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 11:59:08.717201 coreos-metadata[1642]: Jan 29 11:59:08.716 INFO Fetch successful Jan 29 11:59:08.720855 unknown[1642]: wrote ssh authorized keys file for user: core Jan 29 11:59:08.780193 update-ssh-keys[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:59:08.787285 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:59:08.799520 containerd[1589]: time="2025-01-29T11:59:08.798844833Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:59:08.799056 systemd[1]: Finished sshkeys.service. Jan 29 11:59:08.853583 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:59:08.910315 containerd[1589]: time="2025-01-29T11:59:08.910259153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:08.918608 containerd[1589]: time="2025-01-29T11:59:08.918541353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.921232513Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.921291073Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.921573193Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.921637633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.921872633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.921895153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.922233153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.922255073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.922272153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.922284353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.922372993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:08.923872 containerd[1589]: time="2025-01-29T11:59:08.922664153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:59:08.924324 containerd[1589]: time="2025-01-29T11:59:08.922851473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:59:08.924324 containerd[1589]: time="2025-01-29T11:59:08.922873233Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:59:08.924324 containerd[1589]: time="2025-01-29T11:59:08.922960673Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:59:08.924324 containerd[1589]: time="2025-01-29T11:59:08.923008713Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:59:08.937191 containerd[1589]: time="2025-01-29T11:59:08.937107553Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:59:08.937549 containerd[1589]: time="2025-01-29T11:59:08.937527953Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:59:08.937660 containerd[1589]: time="2025-01-29T11:59:08.937645793Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:59:08.937745 containerd[1589]: time="2025-01-29T11:59:08.937732153Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:59:08.937816 containerd[1589]: time="2025-01-29T11:59:08.937802633Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:59:08.939569 containerd[1589]: time="2025-01-29T11:59:08.938080913Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:59:08.944505 containerd[1589]: time="2025-01-29T11:59:08.942698633Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:59:08.945189 containerd[1589]: time="2025-01-29T11:59:08.945127033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:59:08.945189 containerd[1589]: time="2025-01-29T11:59:08.945194273Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945217993Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945236273Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945263153Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945277353Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945293353Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945317833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945332153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945350 containerd[1589]: time="2025-01-29T11:59:08.945351473Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945366753Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945442433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945466073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945480633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945495633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945509513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945577473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945737913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945760193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945778993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945799233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945816593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945862153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945880793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.945834 containerd[1589]: time="2025-01-29T11:59:08.945903913Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:59:08.946349 containerd[1589]: time="2025-01-29T11:59:08.945937753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.946349 containerd[1589]: time="2025-01-29T11:59:08.945952273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.946349 containerd[1589]: time="2025-01-29T11:59:08.945970633Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952519673Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952632993Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952654233Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952674273Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952691793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952752913Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952767953Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:59:08.956933 containerd[1589]: time="2025-01-29T11:59:08.952798353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:59:08.957241 containerd[1589]: time="2025-01-29T11:59:08.953705433Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:59:08.957241 containerd[1589]: time="2025-01-29T11:59:08.953814873Z" level=info msg="Connect containerd service" Jan 29 11:59:08.957241 containerd[1589]: time="2025-01-29T11:59:08.954064393Z" level=info msg="using legacy CRI server" Jan 29 11:59:08.957241 containerd[1589]: time="2025-01-29T11:59:08.954082473Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:59:08.958468 containerd[1589]: time="2025-01-29T11:59:08.958367353Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:59:08.965540 containerd[1589]: time="2025-01-29T11:59:08.965485993Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 
11:59:08.966022 containerd[1589]: time="2025-01-29T11:59:08.965920353Z" level=info msg="Start subscribing containerd event" Jan 29 11:59:08.966022 containerd[1589]: time="2025-01-29T11:59:08.966018073Z" level=info msg="Start recovering state" Jan 29 11:59:08.966128 containerd[1589]: time="2025-01-29T11:59:08.966110633Z" level=info msg="Start event monitor" Jan 29 11:59:08.966128 containerd[1589]: time="2025-01-29T11:59:08.966126873Z" level=info msg="Start snapshots syncer" Jan 29 11:59:08.966198 containerd[1589]: time="2025-01-29T11:59:08.966138913Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:59:08.966198 containerd[1589]: time="2025-01-29T11:59:08.966147353Z" level=info msg="Start streaming server" Jan 29 11:59:08.970029 containerd[1589]: time="2025-01-29T11:59:08.968388633Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:59:08.970029 containerd[1589]: time="2025-01-29T11:59:08.968702673Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:59:08.969083 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:59:08.974097 containerd[1589]: time="2025-01-29T11:59:08.973994993Z" level=info msg="containerd successfully booted in 0.198967s" Jan 29 11:59:09.115686 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:59:09.177460 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:59:09.196580 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:59:09.211972 tar[1587]: linux-arm64/LICENSE Jan 29 11:59:09.211972 tar[1587]: linux-arm64/README.md Jan 29 11:59:09.228611 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:59:09.229237 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:59:09.234890 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:59:09.246678 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:59:09.263754 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:59:09.279524 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:59:09.288700 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:59:09.289739 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:59:09.643596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:09.645438 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:59:09.647666 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:09.652417 systemd[1]: Startup finished in 6.855s (kernel) + 5.410s (userspace) = 12.266s. Jan 29 11:59:10.362100 kubelet[1695]: E0129 11:59:10.362053 1695 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:10.365939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:10.366298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:20.616919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 29 11:59:20.624500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:20.761581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:20.765492 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:20.830766 kubelet[1720]: E0129 11:59:20.830568 1720 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:20.833610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:20.833761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:31.084349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:59:31.091506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:31.231499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:31.237996 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:31.305439 kubelet[1741]: E0129 11:59:31.305371 1741 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:31.308087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:31.308424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:41.333057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:59:41.340451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:41.476511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:41.482257 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:41.539053 kubelet[1762]: E0129 11:59:41.538996 1762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:41.543427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:41.543740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:43.858879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:59:43.866702 systemd[1]: Started sshd@0-188.34.178.132:22-36.41.71.82:51696.service - OpenSSH per-connection server daemon (36.41.71.82:51696). Jan 29 11:59:51.582927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:59:51.595547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:59:51.765412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:51.781782 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:51.836291 kubelet[1784]: E0129 11:59:51.836154 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:51.840215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:51.841026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:53.397188 update_engine[1578]: I20250129 11:59:53.396261 1578 update_attempter.cc:509] Updating boot flags... Jan 29 11:59:53.456977 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1802) Jan 29 11:59:53.516198 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1801) Jan 29 12:00:02.083346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 12:00:02.090731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:02.266572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:02.268549 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:02.316376 kubelet[1823]: E0129 12:00:02.316306 1823 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:02.319379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:02.319617 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:12.333322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 12:00:12.341512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:12.526460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:12.532330 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:12.584536 kubelet[1844]: E0129 12:00:12.584405 1844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:12.587159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:12.587382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:22.833285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 12:00:22.841451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 12:00:23.001296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:23.017956 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:23.069483 kubelet[1864]: E0129 12:00:23.069414 1864 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:23.072890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:23.073359 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:33.083016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 12:00:33.092506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:33.246490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:33.259104 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:33.318084 kubelet[1885]: E0129 12:00:33.318035 1885 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:33.320333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:33.320676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:43.333417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 12:00:43.342449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:43.498437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:43.517612 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:43.578435 kubelet[1905]: E0129 12:00:43.578366 1905 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:43.582761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:43.582990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:53.833347 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 12:00:53.841601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:53.995323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:00:54.000065 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:54.050009 kubelet[1927]: E0129 12:00:54.049948 1927 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:54.052264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:54.052569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:58.957545 systemd[1]: Started sshd@1-188.34.178.132:22-139.178.89.65:48850.service - OpenSSH per-connection server daemon (139.178.89.65:48850). Jan 29 12:00:59.961556 sshd[1936]: Accepted publickey for core from 139.178.89.65 port 48850 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:00:59.970620 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:59.981854 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:00:59.989767 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:00:59.994848 systemd-logind[1568]: New session 1 of user core. Jan 29 12:01:00.010146 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:01:00.022754 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:01:00.028063 (systemd)[1942]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:01:00.137429 systemd[1942]: Queued start job for default target default.target. Jan 29 12:01:00.138227 systemd[1942]: Created slice app.slice - User Application Slice. Jan 29 12:01:00.138355 systemd[1942]: Reached target paths.target - Paths. Jan 29 12:01:00.138433 systemd[1942]: Reached target timers.target - Timers. Jan 29 12:01:00.144384 systemd[1942]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:01:00.155348 systemd[1942]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:01:00.155427 systemd[1942]: Reached target sockets.target - Sockets. Jan 29 12:01:00.155440 systemd[1942]: Reached target basic.target - Basic System. Jan 29 12:01:00.155493 systemd[1942]: Reached target default.target - Main User Target. Jan 29 12:01:00.155520 systemd[1942]: Startup finished in 118ms. Jan 29 12:01:00.155684 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:01:00.160524 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:01:00.859726 systemd[1]: Started sshd@2-188.34.178.132:22-139.178.89.65:48862.service - OpenSSH per-connection server daemon (139.178.89.65:48862). Jan 29 12:01:01.833987 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 48862 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:01.837715 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:01.852502 systemd-logind[1568]: New session 2 of user core. Jan 29 12:01:01.864922 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 12:01:02.516077 sshd[1954]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:02.528352 systemd[1]: sshd@2-188.34.178.132:22-139.178.89.65:48862.service: Deactivated successfully. Jan 29 12:01:02.547862 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:01:02.550859 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:01:02.557159 systemd-logind[1568]: Removed session 2. Jan 29 12:01:02.691732 systemd[1]: Started sshd@3-188.34.178.132:22-139.178.89.65:56380.service - OpenSSH per-connection server daemon (139.178.89.65:56380). Jan 29 12:01:03.691276 sshd[1962]: Accepted publickey for core from 139.178.89.65 port 56380 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:03.694877 sshd[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:03.699381 systemd-logind[1568]: New session 3 of user core. Jan 29 12:01:03.710736 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:01:04.083903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 12:01:04.093577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:04.288370 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:04.290502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:04.341012 kubelet[1979]: E0129 12:01:04.339010 1979 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:04.341659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:04.341832 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:04.380949 sshd[1962]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:04.389103 systemd[1]: sshd@3-188.34.178.132:22-139.178.89.65:56380.service: Deactivated successfully. Jan 29 12:01:04.393387 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:01:04.395272 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:01:04.396934 systemd-logind[1568]: Removed session 3. Jan 29 12:01:04.546542 systemd[1]: Started sshd@4-188.34.178.132:22-139.178.89.65:56390.service - OpenSSH per-connection server daemon (139.178.89.65:56390). Jan 29 12:01:05.556011 sshd[1991]: Accepted publickey for core from 139.178.89.65 port 56390 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:05.559126 sshd[1991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:05.572918 systemd-logind[1568]: New session 4 of user core. Jan 29 12:01:05.583827 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:01:06.248832 sshd[1991]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:06.254201 systemd[1]: sshd@4-188.34.178.132:22-139.178.89.65:56390.service: Deactivated successfully. Jan 29 12:01:06.258287 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:01:06.258715 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:01:06.260408 systemd-logind[1568]: Removed session 4. 
Jan 29 12:01:06.423816 systemd[1]: Started sshd@5-188.34.178.132:22-139.178.89.65:56400.service - OpenSSH per-connection server daemon (139.178.89.65:56400). Jan 29 12:01:07.397774 sshd[1999]: Accepted publickey for core from 139.178.89.65 port 56400 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:07.400641 sshd[1999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:07.405873 systemd-logind[1568]: New session 5 of user core. Jan 29 12:01:07.420472 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:01:07.929991 sudo[2003]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:01:07.930355 sudo[2003]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:01:07.943699 sudo[2003]: pam_unix(sudo:session): session closed for user root Jan 29 12:01:08.102721 sshd[1999]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:08.108268 systemd[1]: sshd@5-188.34.178.132:22-139.178.89.65:56400.service: Deactivated successfully. Jan 29 12:01:08.113354 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:01:08.115744 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:01:08.117369 systemd-logind[1568]: Removed session 5. Jan 29 12:01:08.276870 systemd[1]: Started sshd@6-188.34.178.132:22-139.178.89.65:56410.service - OpenSSH per-connection server daemon (139.178.89.65:56410). Jan 29 12:01:09.259473 sshd[2008]: Accepted publickey for core from 139.178.89.65 port 56410 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:09.261325 sshd[2008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:09.268750 systemd-logind[1568]: New session 6 of user core. Jan 29 12:01:09.275043 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:01:09.782190 sudo[2014]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:01:09.782508 sudo[2014]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:01:09.788677 sudo[2014]: pam_unix(sudo:session): session closed for user root Jan 29 12:01:09.794597 sudo[2013]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:01:09.794915 sudo[2013]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:01:09.820329 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:01:09.823436 auditctl[2017]: No rules Jan 29 12:01:09.824004 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:01:09.824285 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:01:09.830379 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:01:09.872071 augenrules[2036]: No rules Jan 29 12:01:09.873395 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:01:09.874787 sudo[2013]: pam_unix(sudo:session): session closed for user root Jan 29 12:01:10.036501 sshd[2008]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:10.040454 systemd[1]: sshd@6-188.34.178.132:22-139.178.89.65:56410.service: Deactivated successfully. Jan 29 12:01:10.043572 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:01:10.045971 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 29 12:01:10.047416 systemd-logind[1568]: Removed session 6. Jan 29 12:01:10.202757 systemd[1]: Started sshd@7-188.34.178.132:22-139.178.89.65:56426.service - OpenSSH per-connection server daemon (139.178.89.65:56426). Jan 29 12:01:11.179302 sshd[2045]: Accepted publickey for core from 139.178.89.65 port 56426 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:01:11.181045 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:11.187616 systemd-logind[1568]: New session 7 of user core. Jan 29 12:01:11.202842 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:01:11.703325 sudo[2049]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:01:11.703623 sudo[2049]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:01:12.051793 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:01:12.065096 (dockerd)[2064]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:01:12.327284 dockerd[2064]: time="2025-01-29T12:01:12.327085980Z" level=info msg="Starting up" Jan 29 12:01:12.436625 systemd[1]: var-lib-docker-metacopy\x2dcheck1592605764-merged.mount: Deactivated successfully. Jan 29 12:01:12.452670 dockerd[2064]: time="2025-01-29T12:01:12.451777743Z" level=info msg="Loading containers: start." Jan 29 12:01:12.607545 kernel: Initializing XFRM netlink socket Jan 29 12:01:12.710120 systemd-networkd[1239]: docker0: Link UP Jan 29 12:01:12.736035 dockerd[2064]: time="2025-01-29T12:01:12.735215981Z" level=info msg="Loading containers: done." Jan 29 12:01:12.758482 dockerd[2064]: time="2025-01-29T12:01:12.758424860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:01:12.758876 dockerd[2064]: time="2025-01-29T12:01:12.758843264Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:01:12.759197 dockerd[2064]: time="2025-01-29T12:01:12.759145267Z" level=info msg="Daemon has completed initialization" Jan 29 12:01:12.803100 dockerd[2064]: time="2025-01-29T12:01:12.802970479Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:01:12.803368 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:01:13.410248 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1796303398-merged.mount: Deactivated successfully. Jan 29 12:01:14.144287 containerd[1589]: time="2025-01-29T12:01:14.144228566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:01:14.582897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 12:01:14.589671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:14.762494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:01:14.765792 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:14.851624 kubelet[2222]: E0129 12:01:14.851424 2222 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:14.855673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753033582.mount: Deactivated successfully. Jan 29 12:01:14.859654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:14.859877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:16.002200 containerd[1589]: time="2025-01-29T12:01:16.000873617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:16.003962 containerd[1589]: time="2025-01-29T12:01:16.003912965Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 29 12:01:16.006790 containerd[1589]: time="2025-01-29T12:01:16.006740711Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:16.012344 containerd[1589]: time="2025-01-29T12:01:16.012279323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:16.013857 containerd[1589]: time="2025-01-29T12:01:16.013786217Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.86950121s" Jan 29 12:01:16.013857 containerd[1589]: time="2025-01-29T12:01:16.013841938Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 12:01:16.039598 containerd[1589]: time="2025-01-29T12:01:16.039551138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:01:18.000219 containerd[1589]: time="2025-01-29T12:01:17.999934240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:18.003793 containerd[1589]: time="2025-01-29T12:01:18.003370032Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 29 12:01:18.006221 containerd[1589]: time="2025-01-29T12:01:18.005861574Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:18.011544 containerd[1589]: time="2025-01-29T12:01:18.011460984Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:18.013964 containerd[1589]: time="2025-01-29T12:01:18.013408761Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.9734853s" Jan 29 12:01:18.013964 containerd[1589]: time="2025-01-29T12:01:18.013467602Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 12:01:18.041201 containerd[1589]: time="2025-01-29T12:01:18.040616324Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:01:19.014198 containerd[1589]: time="2025-01-29T12:01:19.012300223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.016011 containerd[1589]: time="2025-01-29T12:01:19.015955935Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 29 12:01:19.018119 containerd[1589]: time="2025-01-29T12:01:19.018071354Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.021575 containerd[1589]: time="2025-01-29T12:01:19.021515624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:19.022943 containerd[1589]: time="2025-01-29T12:01:19.022893676Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 982.176271ms" Jan 29 12:01:19.022943 containerd[1589]: time="2025-01-29T12:01:19.022936596Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 12:01:19.049089 containerd[1589]: time="2025-01-29T12:01:19.049022783Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:01:20.061388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662006335.mount: Deactivated successfully. 
Jan 29 12:01:20.684362 containerd[1589]: time="2025-01-29T12:01:20.684258529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:20.689579 containerd[1589]: time="2025-01-29T12:01:20.689409412Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 29 12:01:20.691248 containerd[1589]: time="2025-01-29T12:01:20.691195908Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:20.695576 containerd[1589]: time="2025-01-29T12:01:20.694193893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:20.695576 containerd[1589]: time="2025-01-29T12:01:20.695072341Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.645999678s" Jan 29 12:01:20.695576 containerd[1589]: time="2025-01-29T12:01:20.695113781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 12:01:20.719419 containerd[1589]: time="2025-01-29T12:01:20.719337947Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:01:21.341624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638255336.mount: Deactivated successfully. 
Jan 29 12:01:22.084235 containerd[1589]: time="2025-01-29T12:01:22.084153498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.087359 containerd[1589]: time="2025-01-29T12:01:22.087259163Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 29 12:01:22.089406 containerd[1589]: time="2025-01-29T12:01:22.088675174Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.094311 containerd[1589]: time="2025-01-29T12:01:22.094235940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.096092 containerd[1589]: time="2025-01-29T12:01:22.096032914Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.376432165s" Jan 29 12:01:22.096092 containerd[1589]: time="2025-01-29T12:01:22.096084635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 12:01:22.120636 containerd[1589]: time="2025-01-29T12:01:22.120577554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:01:22.642472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242914158.mount: Deactivated successfully. 
Jan 29 12:01:22.654529 containerd[1589]: time="2025-01-29T12:01:22.654478095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.656908 containerd[1589]: time="2025-01-29T12:01:22.656857554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 29 12:01:22.658368 containerd[1589]: time="2025-01-29T12:01:22.658296926Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.664188 containerd[1589]: time="2025-01-29T12:01:22.662447639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.664188 containerd[1589]: time="2025-01-29T12:01:22.664146973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 543.515819ms" Jan 29 12:01:22.664376 containerd[1589]: time="2025-01-29T12:01:22.664214454Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 12:01:22.692318 containerd[1589]: time="2025-01-29T12:01:22.692280522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:01:23.324066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503760963.mount: Deactivated successfully. Jan 29 12:01:24.870193 containerd[1589]: time="2025-01-29T12:01:24.870089219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:24.873207 containerd[1589]: time="2025-01-29T12:01:24.872246876Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 29 12:01:24.873207 containerd[1589]: time="2025-01-29T12:01:24.872767280Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:24.877348 containerd[1589]: time="2025-01-29T12:01:24.876768191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:24.878297 containerd[1589]: time="2025-01-29T12:01:24.878246842Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.185781799s" Jan 29 12:01:24.878297 containerd[1589]: time="2025-01-29T12:01:24.878297603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 12:01:24.925986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jan 29 12:01:24.939777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:25.088447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:25.093102 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:25.140893 kubelet[2443]: E0129 12:01:25.140737 2443 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:25.144384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:25.144628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:29.860532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:29.872628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:29.918836 systemd[1]: Reloading requested from client PID 2507 ('systemctl') (unit session-7.scope)... Jan 29 12:01:29.918861 systemd[1]: Reloading... Jan 29 12:01:30.062197 zram_generator::config[2553]: No configuration found. Jan 29 12:01:30.171954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:30.239319 systemd[1]: Reloading finished in 318 ms. Jan 29 12:01:30.296518 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:01:30.296623 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:01:30.296919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:30.306704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:30.437442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:30.449662 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:01:30.501415 kubelet[2610]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:30.501415 kubelet[2610]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:01:30.501415 kubelet[2610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:01:30.503031 kubelet[2610]: I0129 12:01:30.502948 2610 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:01:31.534690 kubelet[2610]: I0129 12:01:31.534614 2610 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:01:31.534690 kubelet[2610]: I0129 12:01:31.534666 2610 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:01:31.535126 kubelet[2610]: I0129 12:01:31.534920 2610 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:01:31.560594 kubelet[2610]: E0129 12:01:31.560245 2610 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.34.178.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.560594 kubelet[2610]: I0129 12:01:31.560456 2610 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:01:31.571303 kubelet[2610]: I0129 12:01:31.571158 2610 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:01:31.573417 kubelet[2610]: I0129 12:01:31.573304 2610 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:01:31.573624 kubelet[2610]: I0129 12:01:31.573394 2610 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-b-488529c6ca","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:01:31.573724 kubelet[2610]: I0129 12:01:31.573676 2610 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:01:31.573724 kubelet[2610]: I0129 12:01:31.573688 2610 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:01:31.574070 kubelet[2610]: I0129 12:01:31.574019 2610 state_mem.go:36] "Initialized new 
in-memory state store" Jan 29 12:01:31.577813 kubelet[2610]: I0129 12:01:31.576287 2610 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:01:31.577813 kubelet[2610]: I0129 12:01:31.576323 2610 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:01:31.577813 kubelet[2610]: W0129 12:01:31.576436 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-b-488529c6ca&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.577813 kubelet[2610]: I0129 12:01:31.576496 2610 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:01:31.577813 kubelet[2610]: E0129 12:01:31.576495 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-b-488529c6ca&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.577813 kubelet[2610]: I0129 12:01:31.576578 2610 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:01:31.577813 kubelet[2610]: W0129 12:01:31.577721 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.577813 kubelet[2610]: E0129 12:01:31.577768 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.580296 kubelet[2610]: I0129 12:01:31.578596 2610 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:01:31.580296 kubelet[2610]: I0129 12:01:31.579059 2610 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:01:31.580296 kubelet[2610]: W0129 12:01:31.579113 2610 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 12:01:31.580296 kubelet[2610]: I0129 12:01:31.580043 2610 server.go:1264] "Started kubelet" Jan 29 12:01:31.587972 kubelet[2610]: E0129 12:01:31.587440 2610 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.34.178.132:6443/api/v1/namespaces/default/events\": dial tcp 188.34.178.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-b-488529c6ca.181f281e403aa8ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-b-488529c6ca,UID:ci-4081-3-0-b-488529c6ca,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-b-488529c6ca,},FirstTimestamp:2025-01-29 12:01:31.580016814 +0000 UTC m=+1.126290255,LastTimestamp:2025-01-29 12:01:31.580016814 +0000 UTC m=+1.126290255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-b-488529c6ca,}" Jan 29 12:01:31.589150 kubelet[2610]: I0129 12:01:31.589090 2610 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:01:31.593844 kubelet[2610]: I0129 12:01:31.593763 2610 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:01:31.594343 kubelet[2610]: I0129 12:01:31.594324 2610 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:01:31.595628 kubelet[2610]: I0129 12:01:31.595500 2610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:01:31.596242 kubelet[2610]: I0129 12:01:31.596222 2610 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:01:31.601719 kubelet[2610]: E0129 12:01:31.601687 2610 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:01:31.603595 kubelet[2610]: E0129 12:01:31.603563 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-b-488529c6ca\" not found" Jan 29 12:01:31.604012 kubelet[2610]: I0129 12:01:31.603997 2610 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:01:31.604272 kubelet[2610]: I0129 12:01:31.604257 2610 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:01:31.604495 kubelet[2610]: I0129 12:01:31.604483 2610 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:01:31.605072 kubelet[2610]: W0129 12:01:31.605029 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.605202 kubelet[2610]: E0129 12:01:31.605188 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.607846 kubelet[2610]: E0129 12:01:31.607794 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-b-488529c6ca?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="200ms" Jan 29 12:01:31.608119 kubelet[2610]: I0129 12:01:31.608095 2610 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:01:31.608353 kubelet[2610]: I0129 12:01:31.608325 2610 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:01:31.611345 kubelet[2610]: I0129 12:01:31.611304 2610 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:01:31.629815 kubelet[2610]: I0129 12:01:31.629760 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:01:31.631384 kubelet[2610]: I0129 12:01:31.631351 2610 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:01:31.631526 kubelet[2610]: I0129 12:01:31.631516 2610 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:01:31.631585 kubelet[2610]: I0129 12:01:31.631576 2610 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:01:31.631770 kubelet[2610]: E0129 12:01:31.631747 2610 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:01:31.638045 kubelet[2610]: W0129 12:01:31.638000 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.638343 kubelet[2610]: E0129 12:01:31.638249 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:31.639897 kubelet[2610]: I0129 12:01:31.639761 2610 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:01:31.639897 kubelet[2610]: I0129 12:01:31.639784 2610 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:01:31.639897 kubelet[2610]: I0129 12:01:31.639813 2610 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:31.641517 kubelet[2610]: I0129 12:01:31.641480 2610 policy_none.go:49] "None policy: Start" Jan 29 12:01:31.643150 kubelet[2610]: I0129 12:01:31.642444 2610 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:01:31.643150 kubelet[2610]: I0129 12:01:31.642501 2610 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:01:31.648568 kubelet[2610]: I0129 12:01:31.648530 2610 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:01:31.649044 kubelet[2610]: I0129 12:01:31.649003 2610 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:01:31.649255 kubelet[2610]: I0129 12:01:31.649243 2610 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:01:31.655613 kubelet[2610]: E0129 12:01:31.655584 2610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-b-488529c6ca\" not found" Jan 29 12:01:31.708016 kubelet[2610]: I0129 12:01:31.707909 2610 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.708611 kubelet[2610]: E0129 12:01:31.708575 2610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.34.178.132:6443/api/v1/nodes\": dial tcp 188.34.178.132:6443: connect: connection refused" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.734056 kubelet[2610]: I0129 12:01:31.733452 2610 topology_manager.go:215] "Topology Admit Handler" podUID="413511ca7d4bf6f458ce79a2e24d0a70" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.738833 kubelet[2610]: I0129 12:01:31.738487 2610 topology_manager.go:215] "Topology Admit Handler" podUID="c86db993ba533ab9b302483a7d70f844" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.744304 kubelet[2610]: I0129 12:01:31.744110 2610 
topology_manager.go:215] "Topology Admit Handler" podUID="cb07c9f1cffb3c1a716b3dae20202c81" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.805848 kubelet[2610]: I0129 12:01:31.805701 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806527 kubelet[2610]: I0129 12:01:31.806065 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/413511ca7d4bf6f458ce79a2e24d0a70-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" (UID: \"413511ca7d4bf6f458ce79a2e24d0a70\") " pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806527 kubelet[2610]: I0129 12:01:31.806129 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/413511ca7d4bf6f458ce79a2e24d0a70-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" (UID: \"413511ca7d4bf6f458ce79a2e24d0a70\") " pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806527 kubelet[2610]: I0129 12:01:31.806200 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806527 kubelet[2610]: I0129 12:01:31.806239 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806527 kubelet[2610]: I0129 12:01:31.806275 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806922 kubelet[2610]: I0129 12:01:31.806313 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb07c9f1cffb3c1a716b3dae20202c81-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-b-488529c6ca\" (UID: \"cb07c9f1cffb3c1a716b3dae20202c81\") " pod="kube-system/kube-scheduler-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806922 kubelet[2610]: I0129 12:01:31.806344 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/413511ca7d4bf6f458ce79a2e24d0a70-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" (UID: \"413511ca7d4bf6f458ce79a2e24d0a70\") " 
pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.806922 kubelet[2610]: I0129 12:01:31.806376 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.808747 kubelet[2610]: E0129 12:01:31.808694 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-b-488529c6ca?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="400ms" Jan 29 12:01:31.912130 kubelet[2610]: I0129 12:01:31.911656 2610 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:31.912130 kubelet[2610]: E0129 12:01:31.912069 2610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.34.178.132:6443/api/v1/nodes\": dial tcp 188.34.178.132:6443: connect: connection refused" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:32.045933 containerd[1589]: time="2025-01-29T12:01:32.045864766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-b-488529c6ca,Uid:413511ca7d4bf6f458ce79a2e24d0a70,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:32.052275 containerd[1589]: time="2025-01-29T12:01:32.051408162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-b-488529c6ca,Uid:c86db993ba533ab9b302483a7d70f844,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:32.058845 containerd[1589]: time="2025-01-29T12:01:32.058273247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-b-488529c6ca,Uid:cb07c9f1cffb3c1a716b3dae20202c81,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:32.210140 kubelet[2610]: E0129 12:01:32.210073 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-b-488529c6ca?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="800ms" Jan 29 12:01:32.314972 kubelet[2610]: I0129 12:01:32.314761 2610 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:32.318081 kubelet[2610]: E0129 12:01:32.318018 2610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.34.178.132:6443/api/v1/nodes\": dial tcp 188.34.178.132:6443: connect: connection refused" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:32.490892 kubelet[2610]: W0129 12:01:32.490677 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-b-488529c6ca&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.490892 kubelet[2610]: E0129 12:01:32.490777 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.34.178.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-b-488529c6ca&limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.621884 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2952408475.mount: Deactivated successfully. Jan 29 12:01:32.639197 containerd[1589]: time="2025-01-29T12:01:32.637208723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:32.639197 containerd[1589]: time="2025-01-29T12:01:32.638598972Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:32.641079 containerd[1589]: time="2025-01-29T12:01:32.641038788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:01:32.641334 containerd[1589]: time="2025-01-29T12:01:32.641317549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 12:01:32.643377 containerd[1589]: time="2025-01-29T12:01:32.643310923Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:32.646426 containerd[1589]: time="2025-01-29T12:01:32.646156661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:32.647585 containerd[1589]: time="2025-01-29T12:01:32.647522950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:01:32.652955 containerd[1589]: time="2025-01-29T12:01:32.652487423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:32.653740 containerd[1589]: time="2025-01-29T12:01:32.653687071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.246702ms" Jan 29 12:01:32.658369 containerd[1589]: time="2025-01-29T12:01:32.658312661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 606.804898ms" Jan 29 12:01:32.663796 containerd[1589]: time="2025-01-29T12:01:32.663722176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 605.217128ms" Jan 29 12:01:32.673095 kubelet[2610]: W0129 12:01:32.672984 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.673095 kubelet[2610]: E0129 12:01:32.673064 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.34.178.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.727848 kubelet[2610]: W0129 12:01:32.727724 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.727848 kubelet[2610]: E0129 12:01:32.727810 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.34.178.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.824810 containerd[1589]: time="2025-01-29T12:01:32.824656351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:32.825929 containerd[1589]: time="2025-01-29T12:01:32.825727438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:32.825929 containerd[1589]: time="2025-01-29T12:01:32.825795279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:32.825929 containerd[1589]: time="2025-01-29T12:01:32.825806839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.826280 containerd[1589]: time="2025-01-29T12:01:32.826153281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:32.826280 containerd[1589]: time="2025-01-29T12:01:32.826256202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.826896 containerd[1589]: time="2025-01-29T12:01:32.826648845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:32.826896 containerd[1589]: time="2025-01-29T12:01:32.826835566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:32.827326 containerd[1589]: time="2025-01-29T12:01:32.826854006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.827573 containerd[1589]: time="2025-01-29T12:01:32.827535610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.827942 containerd[1589]: time="2025-01-29T12:01:32.827794332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.828141 containerd[1589]: time="2025-01-29T12:01:32.827864692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:32.920623 containerd[1589]: time="2025-01-29T12:01:32.919915576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-b-488529c6ca,Uid:c86db993ba533ab9b302483a7d70f844,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbc52ec1593faa9935418e16957abf3c12b4debeeaf2c02e048c9a69f314a890\"" Jan 29 12:01:32.929727 containerd[1589]: time="2025-01-29T12:01:32.929668880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-b-488529c6ca,Uid:413511ca7d4bf6f458ce79a2e24d0a70,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc1115ddff26a8443f2385ce4ec6acfa3fb591e8f788f07473f768ff6fc1293c\"" Jan 29 12:01:32.934049 containerd[1589]: time="2025-01-29T12:01:32.933991588Z" level=info msg="CreateContainer within sandbox \"cbc52ec1593faa9935418e16957abf3c12b4debeeaf2c02e048c9a69f314a890\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:01:32.939095 containerd[1589]: time="2025-01-29T12:01:32.938802420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-b-488529c6ca,Uid:cb07c9f1cffb3c1a716b3dae20202c81,Namespace:kube-system,Attempt:0,} returns sandbox id \"20b318628fd01ec36db8fb17c6c0d7ac94cb08dac6e0ceacb015c5a97b52b362\"" Jan 29 12:01:32.942785 kubelet[2610]: W0129 12:01:32.942458 2610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.942785 kubelet[2610]: E0129 12:01:32.942531 2610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.34.178.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.34.178.132:6443: connect: connection refused Jan 29 12:01:32.942961 containerd[1589]: time="2025-01-29T12:01:32.942795286Z" level=info msg="CreateContainer within sandbox \"dc1115ddff26a8443f2385ce4ec6acfa3fb591e8f788f07473f768ff6fc1293c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:01:32.947180 containerd[1589]: time="2025-01-29T12:01:32.946873593Z" level=info msg="CreateContainer within sandbox \"20b318628fd01ec36db8fb17c6c0d7ac94cb08dac6e0ceacb015c5a97b52b362\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:01:32.980425 containerd[1589]: time="2025-01-29T12:01:32.980258092Z" level=info msg="CreateContainer within sandbox \"cbc52ec1593faa9935418e16957abf3c12b4debeeaf2c02e048c9a69f314a890\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2930cc7aa7778a65b312c62f23dd706d8beb8930bba84bdfbef25c78ab8180e7\"" Jan 29 12:01:32.986251 containerd[1589]: time="2025-01-29T12:01:32.984817721Z" level=info msg="CreateContainer within sandbox \"dc1115ddff26a8443f2385ce4ec6acfa3fb591e8f788f07473f768ff6fc1293c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed4e3432a11cf0b51c9f7440577f8e5d4a282bac2e2e564121fa2a884601e08d\"" Jan 29 12:01:32.986251 containerd[1589]: time="2025-01-29T12:01:32.985339245Z" level=info msg="StartContainer for \"2930cc7aa7778a65b312c62f23dd706d8beb8930bba84bdfbef25c78ab8180e7\"" Jan 29 12:01:32.990526 containerd[1589]: time="2025-01-29T12:01:32.990446678Z" level=info msg="CreateContainer within sandbox 
\"20b318628fd01ec36db8fb17c6c0d7ac94cb08dac6e0ceacb015c5a97b52b362\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57d9a279ba65632826399ab9faa34a6541550788222c23ee94c66f59ddbab4c8\"" Jan 29 12:01:32.992933 containerd[1589]: time="2025-01-29T12:01:32.992834654Z" level=info msg="StartContainer for \"ed4e3432a11cf0b51c9f7440577f8e5d4a282bac2e2e564121fa2a884601e08d\"" Jan 29 12:01:32.997477 containerd[1589]: time="2025-01-29T12:01:32.997373084Z" level=info msg="StartContainer for \"57d9a279ba65632826399ab9faa34a6541550788222c23ee94c66f59ddbab4c8\"" Jan 29 12:01:33.012079 kubelet[2610]: E0129 12:01:33.012010 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.34.178.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-b-488529c6ca?timeout=10s\": dial tcp 188.34.178.132:6443: connect: connection refused" interval="1.6s" Jan 29 12:01:33.114312 containerd[1589]: time="2025-01-29T12:01:33.114224595Z" level=info msg="StartContainer for \"2930cc7aa7778a65b312c62f23dd706d8beb8930bba84bdfbef25c78ab8180e7\" returns successfully" Jan 29 12:01:33.134276 kubelet[2610]: I0129 12:01:33.131996 2610 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:33.134614 kubelet[2610]: E0129 12:01:33.134448 2610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.34.178.132:6443/api/v1/nodes\": dial tcp 188.34.178.132:6443: connect: connection refused" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:33.136950 containerd[1589]: time="2025-01-29T12:01:33.136427978Z" level=info msg="StartContainer for \"57d9a279ba65632826399ab9faa34a6541550788222c23ee94c66f59ddbab4c8\" returns successfully" Jan 29 12:01:33.139485 containerd[1589]: time="2025-01-29T12:01:33.138480351Z" level=info msg="StartContainer for \"ed4e3432a11cf0b51c9f7440577f8e5d4a282bac2e2e564121fa2a884601e08d\" returns successfully" Jan 29 12:01:34.739093 kubelet[2610]: I0129 12:01:34.738789 2610 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:35.466918 kubelet[2610]: E0129 12:01:35.466872 2610 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-b-488529c6ca\" not found" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:35.527348 kubelet[2610]: I0129 12:01:35.527283 2610 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:35.579860 kubelet[2610]: I0129 12:01:35.579482 2610 apiserver.go:52] "Watching apiserver" Jan 29 12:01:35.605442 kubelet[2610]: I0129 12:01:35.605322 2610 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:01:35.678234 kubelet[2610]: E0129 12:01:35.677342 2610 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:37.897373 systemd[1]: Reloading requested from client PID 2885 ('systemctl') (unit session-7.scope)... Jan 29 12:01:37.897393 systemd[1]: Reloading... Jan 29 12:01:38.029255 zram_generator::config[2936]: No configuration found. 
Jan 29 12:01:38.141560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:38.226589 systemd[1]: Reloading finished in 328 ms. Jan 29 12:01:38.264097 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:38.264748 kubelet[2610]: I0129 12:01:38.264492 2610 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:01:38.280002 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:01:38.281459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:38.288292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:38.423715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:01:38.432887 (kubelet)[2982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:01:38.486053 kubelet[2982]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:38.486053 kubelet[2982]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:01:38.486053 kubelet[2982]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:01:38.486053 kubelet[2982]: I0129 12:01:38.485918 2982 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:01:38.491257 kubelet[2982]: I0129 12:01:38.491206 2982 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:01:38.491257 kubelet[2982]: I0129 12:01:38.491239 2982 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:01:38.491500 kubelet[2982]: I0129 12:01:38.491447 2982 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:01:38.493110 kubelet[2982]: I0129 12:01:38.493056 2982 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:01:38.495218 kubelet[2982]: I0129 12:01:38.494879 2982 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:01:38.507978 kubelet[2982]: I0129 12:01:38.507941 2982 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:01:38.510675 kubelet[2982]: I0129 12:01:38.510592 2982 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:01:38.511096 kubelet[2982]: I0129 12:01:38.510836 2982 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-b-488529c6ca","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:01:38.511269 kubelet[2982]: I0129 12:01:38.511241 2982 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:01:38.511360 kubelet[2982]: I0129 12:01:38.511350 2982 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:01:38.511483 kubelet[2982]: I0129 12:01:38.511474 2982 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:38.511754 kubelet[2982]: I0129 12:01:38.511740 2982 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:01:38.511883 kubelet[2982]: I0129 12:01:38.511870 2982 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:01:38.512055 kubelet[2982]: I0129 12:01:38.512045 2982 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:01:38.512141 kubelet[2982]: I0129 12:01:38.512131 2982 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:01:38.514847 kubelet[2982]: I0129 12:01:38.514800 2982 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:01:38.515697 kubelet[2982]: I0129 12:01:38.515571 2982 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:01:38.517006 kubelet[2982]: I0129 12:01:38.516981 2982 server.go:1264] "Started kubelet" Jan 29 12:01:38.523293 kubelet[2982]: I0129 12:01:38.523262 2982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:01:38.536198 kubelet[2982]: I0129 12:01:38.535254 2982 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:01:38.537444 kubelet[2982]: I0129 12:01:38.537415 2982 server.go:455] "Adding 
debug handlers to kubelet server" Jan 29 12:01:38.539553 kubelet[2982]: I0129 12:01:38.539480 2982 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:01:38.541788 kubelet[2982]: I0129 12:01:38.541763 2982 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:01:38.546829 kubelet[2982]: I0129 12:01:38.546773 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:01:38.549778 kubelet[2982]: I0129 12:01:38.549744 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:01:38.550246 kubelet[2982]: I0129 12:01:38.549935 2982 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:01:38.550246 kubelet[2982]: I0129 12:01:38.549967 2982 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:01:38.550246 kubelet[2982]: E0129 12:01:38.550021 2982 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:01:38.550398 kubelet[2982]: I0129 12:01:38.547103 2982 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:01:38.557746 kubelet[2982]: I0129 12:01:38.547118 2982 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:01:38.569208 kubelet[2982]: I0129 12:01:38.568649 2982 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:01:38.576662 kubelet[2982]: I0129 12:01:38.576606 2982 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:01:38.578980 kubelet[2982]: E0129 12:01:38.577151 2982 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:01:38.580152 kubelet[2982]: I0129 12:01:38.579843 2982 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:01:38.580152 kubelet[2982]: I0129 12:01:38.579864 2982 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:01:38.642015 kubelet[2982]: I0129 12:01:38.641989 2982 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:01:38.642311 kubelet[2982]: I0129 12:01:38.642227 2982 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:01:38.642697 kubelet[2982]: I0129 12:01:38.642378 2982 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:38.642697 kubelet[2982]: I0129 12:01:38.642558 2982 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:01:38.642697 kubelet[2982]: I0129 12:01:38.642571 2982 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:01:38.642697 kubelet[2982]: I0129 12:01:38.642590 2982 policy_none.go:49] "None policy: Start" Jan 29 12:01:38.643611 kubelet[2982]: I0129 12:01:38.643589 2982 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:01:38.643725 kubelet[2982]: I0129 12:01:38.643660 2982 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:01:38.643870 kubelet[2982]: I0129 12:01:38.643852 2982 state_mem.go:75] "Updated machine memory state" Jan 29 12:01:38.645688 kubelet[2982]: I0129 12:01:38.645667 2982 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:01:38.646244 kubelet[2982]: I0129 12:01:38.645849 2982 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:01:38.646244 kubelet[2982]: I0129 12:01:38.645968 2982 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:01:38.650244 kubelet[2982]: I0129 12:01:38.650157 2982 topology_manager.go:215] "Topology Admit Handler" podUID="413511ca7d4bf6f458ce79a2e24d0a70" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.650375 kubelet[2982]: I0129 12:01:38.650311 2982 topology_manager.go:215] "Topology Admit Handler" podUID="c86db993ba533ab9b302483a7d70f844" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.650375 kubelet[2982]: I0129 12:01:38.650357 2982 topology_manager.go:215] "Topology Admit Handler" podUID="cb07c9f1cffb3c1a716b3dae20202c81" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.657953 kubelet[2982]: I0129 12:01:38.657022 2982 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.665399 kubelet[2982]: E0129 12:01:38.665355 2982 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-b-488529c6ca\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.669233 kubelet[2982]: I0129 12:01:38.669181 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.669388 kubelet[2982]: I0129 12:01:38.669325 2982 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/413511ca7d4bf6f458ce79a2e24d0a70-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" (UID: \"413511ca7d4bf6f458ce79a2e24d0a70\") " pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.669388 kubelet[2982]: I0129 12:01:38.669353 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.670292 kubelet[2982]: I0129 12:01:38.669464 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.670292 kubelet[2982]: I0129 12:01:38.669493 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.670292 kubelet[2982]: I0129 12:01:38.669514 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c86db993ba533ab9b302483a7d70f844-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-b-488529c6ca\" (UID: \"c86db993ba533ab9b302483a7d70f844\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.670292 kubelet[2982]: I0129 12:01:38.669645 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb07c9f1cffb3c1a716b3dae20202c81-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-b-488529c6ca\" (UID: \"cb07c9f1cffb3c1a716b3dae20202c81\") " pod="kube-system/kube-scheduler-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.670292 kubelet[2982]: I0129 12:01:38.669667 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/413511ca7d4bf6f458ce79a2e24d0a70-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" (UID: \"413511ca7d4bf6f458ce79a2e24d0a70\") " pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.670571 kubelet[2982]: I0129 12:01:38.669782 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/413511ca7d4bf6f458ce79a2e24d0a70-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" (UID: \"413511ca7d4bf6f458ce79a2e24d0a70\") " pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.678895 kubelet[2982]: I0129 12:01:38.676703 2982 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:38.678895 
kubelet[2982]: I0129 12:01:38.676818 2982 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-b-488529c6ca" Jan 29 12:01:39.526292 kubelet[2982]: I0129 12:01:39.525517 2982 apiserver.go:52] "Watching apiserver" Jan 29 12:01:39.558519 kubelet[2982]: I0129 12:01:39.558462 2982 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:01:39.625587 kubelet[2982]: E0129 12:01:39.624491 2982 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-b-488529c6ca\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" Jan 29 12:01:39.649448 kubelet[2982]: I0129 12:01:39.649103 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-b-488529c6ca" podStartSLOduration=3.649087883 podStartE2EDuration="3.649087883s" podCreationTimestamp="2025-01-29 12:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:39.648828482 +0000 UTC m=+1.211841270" watchObservedRunningTime="2025-01-29 12:01:39.649087883 +0000 UTC m=+1.212100591" Jan 29 12:01:39.691924 kubelet[2982]: I0129 12:01:39.691082 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-b-488529c6ca" podStartSLOduration=1.6910505630000001 podStartE2EDuration="1.691050563s" podCreationTimestamp="2025-01-29 12:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:39.665192775 +0000 UTC m=+1.228205523" watchObservedRunningTime="2025-01-29 12:01:39.691050563 +0000 UTC m=+1.254063351" Jan 29 12:01:39.731683 kubelet[2982]: I0129 12:01:39.731026 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-b-488529c6ca" podStartSLOduration=1.731005191 podStartE2EDuration="1.731005191s" podCreationTimestamp="2025-01-29 12:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:39.695526669 +0000 UTC m=+1.258539377" watchObservedRunningTime="2025-01-29 12:01:39.731005191 +0000 UTC m=+1.294017939" Jan 29 12:01:43.891340 systemd[1]: sshd@0-188.34.178.132:22-36.41.71.82:51696.service: Deactivated successfully. Jan 29 12:01:44.108409 sudo[2049]: pam_unix(sudo:session): session closed for user root Jan 29 12:01:44.268698 sshd[2045]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:44.276511 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:01:44.277863 systemd[1]: sshd@7-188.34.178.132:22-139.178.89.65:56426.service: Deactivated successfully. Jan 29 12:01:44.287754 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:01:44.296193 systemd-logind[1568]: Removed session 7. Jan 29 12:01:50.472542 systemd[1]: Started sshd@8-188.34.178.132:22-36.41.71.82:53924.service - OpenSSH per-connection server daemon (36.41.71.82:53924). Jan 29 12:01:53.126957 kubelet[2982]: I0129 12:01:53.126762 2982 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:01:53.128991 containerd[1589]: time="2025-01-29T12:01:53.128489890Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 12:01:53.130822 kubelet[2982]: I0129 12:01:53.129848 2982 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:01:54.021146 kubelet[2982]: I0129 12:01:54.017904 2982 topology_manager.go:215] "Topology Admit Handler" podUID="5cd35ace-46b1-4d37-b29d-676f0218df3d" podNamespace="kube-system" podName="kube-proxy-8xhkf" Jan 29 12:01:54.074573 kubelet[2982]: I0129 12:01:54.074540 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5cd35ace-46b1-4d37-b29d-676f0218df3d-kube-proxy\") pod \"kube-proxy-8xhkf\" (UID: \"5cd35ace-46b1-4d37-b29d-676f0218df3d\") " pod="kube-system/kube-proxy-8xhkf" Jan 29 12:01:54.074994 kubelet[2982]: I0129 12:01:54.074974 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cd35ace-46b1-4d37-b29d-676f0218df3d-lib-modules\") pod \"kube-proxy-8xhkf\" (UID: \"5cd35ace-46b1-4d37-b29d-676f0218df3d\") " pod="kube-system/kube-proxy-8xhkf" Jan 29 12:01:54.075111 kubelet[2982]: I0129 12:01:54.075091 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgwnc\" (UniqueName: \"kubernetes.io/projected/5cd35ace-46b1-4d37-b29d-676f0218df3d-kube-api-access-wgwnc\") pod \"kube-proxy-8xhkf\" (UID: \"5cd35ace-46b1-4d37-b29d-676f0218df3d\") " pod="kube-system/kube-proxy-8xhkf" Jan 29 12:01:54.075251 kubelet[2982]: I0129 12:01:54.075233 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cd35ace-46b1-4d37-b29d-676f0218df3d-xtables-lock\") pod \"kube-proxy-8xhkf\" (UID: \"5cd35ace-46b1-4d37-b29d-676f0218df3d\") " pod="kube-system/kube-proxy-8xhkf" Jan 29 12:01:54.213155 kubelet[2982]: I0129 12:01:54.213098 2982 topology_manager.go:215] "Topology Admit Handler" podUID="74cfde88-6828-4940-a16d-a02d42e9ebd6" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-wmqr5" Jan 29 12:01:54.277215 kubelet[2982]: I0129 12:01:54.276974 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74cfde88-6828-4940-a16d-a02d42e9ebd6-var-lib-calico\") pod \"tigera-operator-7bc55997bb-wmqr5\" (UID: \"74cfde88-6828-4940-a16d-a02d42e9ebd6\") " pod="tigera-operator/tigera-operator-7bc55997bb-wmqr5" Jan 29 12:01:54.277215 kubelet[2982]: I0129 12:01:54.277069 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrfc\" (UniqueName: \"kubernetes.io/projected/74cfde88-6828-4940-a16d-a02d42e9ebd6-kube-api-access-ckrfc\") pod \"tigera-operator-7bc55997bb-wmqr5\" (UID: \"74cfde88-6828-4940-a16d-a02d42e9ebd6\") " pod="tigera-operator/tigera-operator-7bc55997bb-wmqr5" Jan 29 12:01:54.334389 containerd[1589]: time="2025-01-29T12:01:54.334341300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xhkf,Uid:5cd35ace-46b1-4d37-b29d-676f0218df3d,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:54.368543 containerd[1589]: time="2025-01-29T12:01:54.368246811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:54.368543 containerd[1589]: time="2025-01-29T12:01:54.368356771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:54.368543 containerd[1589]: time="2025-01-29T12:01:54.368388691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:54.368933 containerd[1589]: time="2025-01-29T12:01:54.368872333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:54.424325 containerd[1589]: time="2025-01-29T12:01:54.424277939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8xhkf,Uid:5cd35ace-46b1-4d37-b29d-676f0218df3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7fade02f2f92dacd82f397a2d66c0034aa1eb554f97c07609272b75278c8cb4\"" Jan 29 12:01:54.428525 containerd[1589]: time="2025-01-29T12:01:54.428474918Z" level=info msg="CreateContainer within sandbox \"d7fade02f2f92dacd82f397a2d66c0034aa1eb554f97c07609272b75278c8cb4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:01:54.447256 containerd[1589]: time="2025-01-29T12:01:54.447101200Z" level=info msg="CreateContainer within sandbox \"d7fade02f2f92dacd82f397a2d66c0034aa1eb554f97c07609272b75278c8cb4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68440dc5ec54e1810c6bc7f02869e72b4329ca92f71bcfdd78cf543d1131a4a1\"" Jan 29 12:01:54.449846 containerd[1589]: time="2025-01-29T12:01:54.448985809Z" level=info msg="StartContainer for \"68440dc5ec54e1810c6bc7f02869e72b4329ca92f71bcfdd78cf543d1131a4a1\"" Jan 29 12:01:54.512294 containerd[1589]: time="2025-01-29T12:01:54.512244610Z" level=info msg="StartContainer for \"68440dc5ec54e1810c6bc7f02869e72b4329ca92f71bcfdd78cf543d1131a4a1\" returns successfully" Jan 29 12:01:54.520293 containerd[1589]: time="2025-01-29T12:01:54.520214965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wmqr5,Uid:74cfde88-6828-4940-a16d-a02d42e9ebd6,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:01:54.565224 containerd[1589]: time="2025-01-29T12:01:54.564615442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:54.565224 containerd[1589]: time="2025-01-29T12:01:54.564782443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:54.565224 containerd[1589]: time="2025-01-29T12:01:54.564843603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:54.565224 containerd[1589]: time="2025-01-29T12:01:54.565020924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:54.627499 containerd[1589]: time="2025-01-29T12:01:54.627440921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wmqr5,Uid:74cfde88-6828-4940-a16d-a02d42e9ebd6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a29ffd3ea8c41a72b7e3ab544bfdaf842d7f4722c9dc41800f1c2b3a9968ee6f\"" Jan 29 12:01:54.633765 containerd[1589]: time="2025-01-29T12:01:54.633571148Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:01:56.394943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332236599.mount: Deactivated successfully. Jan 29 12:01:56.733993 containerd[1589]: time="2025-01-29T12:01:56.733448947Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:56.735658 containerd[1589]: time="2025-01-29T12:01:56.735604797Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 12:01:56.737129 containerd[1589]: time="2025-01-29T12:01:56.737068923Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:56.742102 containerd[1589]: time="2025-01-29T12:01:56.740732819Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:56.742102 containerd[1589]: time="2025-01-29T12:01:56.741653103Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.108016195s" Jan 29 12:01:56.742102 containerd[1589]: time="2025-01-29T12:01:56.741690023Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 12:01:56.746665 containerd[1589]: time="2025-01-29T12:01:56.746512524Z" level=info msg="CreateContainer within sandbox \"a29ffd3ea8c41a72b7e3ab544bfdaf842d7f4722c9dc41800f1c2b3a9968ee6f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:01:56.761585 containerd[1589]: time="2025-01-29T12:01:56.761507508Z" level=info msg="CreateContainer within sandbox \"a29ffd3ea8c41a72b7e3ab544bfdaf842d7f4722c9dc41800f1c2b3a9968ee6f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098\"" Jan 29 12:01:56.764821 containerd[1589]: time="2025-01-29T12:01:56.763703038Z" level=info msg="StartContainer for \"9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098\"" Jan 29 12:01:56.818882 containerd[1589]: time="2025-01-29T12:01:56.818813795Z" level=info msg="StartContainer for \"9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098\" returns successfully" Jan 29 12:01:57.681213 kubelet[2982]: I0129 12:01:57.681117 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8xhkf" podStartSLOduration=3.681090029 podStartE2EDuration="3.681090029s" podCreationTimestamp="2025-01-29 12:01:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:54.674920932 +0000 UTC m=+16.237933680" watchObservedRunningTime="2025-01-29 12:01:57.681090029 +0000 UTC m=+19.244102817" Jan 29 12:01:58.578218 kubelet[2982]: I0129 12:01:58.576702 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-wmqr5" podStartSLOduration=2.463796584 podStartE2EDuration="4.576613439s" podCreationTimestamp="2025-01-29 12:01:54 +0000 UTC" firstStartedPulling="2025-01-29 12:01:54.631020697 +0000 UTC m=+16.194033445" lastFinishedPulling="2025-01-29 12:01:56.743837512 +0000 UTC m=+18.306850300" observedRunningTime="2025-01-29 12:01:57.681548351 +0000 UTC m=+19.244561139" watchObservedRunningTime="2025-01-29 12:01:58.576613439 +0000 UTC m=+20.139626267" Jan 29 12:02:01.472198 kubelet[2982]: I0129 12:02:01.471150 2982 topology_manager.go:215] "Topology Admit Handler" podUID="9ad8bc6e-a42d-4698-bbc7-1235ed8120ac" podNamespace="calico-system" podName="calico-typha-d68bffb9b-gmjlr" Jan 29 12:02:01.525978 kubelet[2982]: I0129 12:02:01.525918 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-tigera-ca-bundle\") pod \"calico-typha-d68bffb9b-gmjlr\" (UID: \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\") " pod="calico-system/calico-typha-d68bffb9b-gmjlr" Jan 29 12:02:01.525978 kubelet[2982]: I0129 12:02:01.525973 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-typha-certs\") pod \"calico-typha-d68bffb9b-gmjlr\" (UID: \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\") " pod="calico-system/calico-typha-d68bffb9b-gmjlr" Jan 29 12:02:01.525978 kubelet[2982]: I0129 12:02:01.525997 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjnt\" (UniqueName: \"kubernetes.io/projected/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-kube-api-access-hzjnt\") pod \"calico-typha-d68bffb9b-gmjlr\" (UID: \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\") " pod="calico-system/calico-typha-d68bffb9b-gmjlr" Jan 29 12:02:01.641201 kubelet[2982]: I0129 12:02:01.640283 2982 topology_manager.go:215] "Topology Admit Handler" podUID="56e6a48b-7702-4cea-a191-6f3a685269b6" podNamespace="calico-system" podName="calico-node-x5lb5" Jan 29 12:02:01.728249 kubelet[2982]: I0129 12:02:01.728036 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-xtables-lock\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728249 kubelet[2982]: I0129 12:02:01.728109 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-policysync\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728249 kubelet[2982]: I0129 12:02:01.728131 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-flexvol-driver-host\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728249 kubelet[2982]: I0129 12:02:01.728160 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-lib-modules\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728249 kubelet[2982]: I0129 12:02:01.728194 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-bin-dir\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728599 kubelet[2982]: I0129 12:02:01.728214 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56e6a48b-7702-4cea-a191-6f3a685269b6-tigera-ca-bundle\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728599 kubelet[2982]: I0129 12:02:01.728232 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56e6a48b-7702-4cea-a191-6f3a685269b6-node-certs\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728599 kubelet[2982]: I0129 12:02:01.728250 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-lib-calico\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728599 kubelet[2982]: I0129 12:02:01.728270 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-net-dir\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.728599 kubelet[2982]: I0129 12:02:01.728291 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-log-dir\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.729988 kubelet[2982]: I0129 12:02:01.729346 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvnh\" (UniqueName: \"kubernetes.io/projected/56e6a48b-7702-4cea-a191-6f3a685269b6-kube-api-access-bdvnh\") pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.729988 kubelet[2982]: I0129 12:02:01.729934 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-run-calico\") 
pod \"calico-node-x5lb5\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " pod="calico-system/calico-node-x5lb5" Jan 29 12:02:01.772958 kubelet[2982]: I0129 12:02:01.772846 2982 topology_manager.go:215] "Topology Admit Handler" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" podNamespace="calico-system" podName="csi-node-driver-f4x79" Jan 29 12:02:01.773240 kubelet[2982]: E0129 12:02:01.773207 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f4x79" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" Jan 29 12:02:01.789445 containerd[1589]: time="2025-01-29T12:02:01.789332470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d68bffb9b-gmjlr,Uid:9ad8bc6e-a42d-4698-bbc7-1235ed8120ac,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:01.840215 kubelet[2982]: I0129 12:02:01.833636 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f8acab89-8346-440a-bbf8-eaa1717109a0-varrun\") pod \"csi-node-driver-f4x79\" (UID: \"f8acab89-8346-440a-bbf8-eaa1717109a0\") " pod="calico-system/csi-node-driver-f4x79" Jan 29 12:02:01.840215 kubelet[2982]: I0129 12:02:01.837318 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8acab89-8346-440a-bbf8-eaa1717109a0-kubelet-dir\") pod \"csi-node-driver-f4x79\" (UID: \"f8acab89-8346-440a-bbf8-eaa1717109a0\") " pod="calico-system/csi-node-driver-f4x79" Jan 29 12:02:01.840215 kubelet[2982]: I0129 12:02:01.837406 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f8acab89-8346-440a-bbf8-eaa1717109a0-registration-dir\") pod \"csi-node-driver-f4x79\" (UID: \"f8acab89-8346-440a-bbf8-eaa1717109a0\") " pod="calico-system/csi-node-driver-f4x79" Jan 29 12:02:01.840215 kubelet[2982]: I0129 12:02:01.837428 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f8acab89-8346-440a-bbf8-eaa1717109a0-socket-dir\") pod \"csi-node-driver-f4x79\" (UID: \"f8acab89-8346-440a-bbf8-eaa1717109a0\") " pod="calico-system/csi-node-driver-f4x79" Jan 29 12:02:01.840215 kubelet[2982]: I0129 12:02:01.837446 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748rb\" (UniqueName: \"kubernetes.io/projected/f8acab89-8346-440a-bbf8-eaa1717109a0-kube-api-access-748rb\") pod \"csi-node-driver-f4x79\" (UID: \"f8acab89-8346-440a-bbf8-eaa1717109a0\") " pod="calico-system/csi-node-driver-f4x79" Jan 29 12:02:01.910847 kubelet[2982]: E0129 12:02:01.910681 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.910847 kubelet[2982]: W0129 12:02:01.910771 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.910847 kubelet[2982]: E0129 12:02:01.910811 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.913940 kubelet[2982]: E0129 12:02:01.913906 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.913940 kubelet[2982]: W0129 12:02:01.913933 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.914143 kubelet[2982]: E0129 12:02:01.913960 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.916345 kubelet[2982]: E0129 12:02:01.915721 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.916345 kubelet[2982]: W0129 12:02:01.915751 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.916345 kubelet[2982]: E0129 12:02:01.915777 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.920321 containerd[1589]: time="2025-01-29T12:02:01.919698594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:01.920321 containerd[1589]: time="2025-01-29T12:02:01.919806075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:01.920321 containerd[1589]: time="2025-01-29T12:02:01.919834075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:01.920321 containerd[1589]: time="2025-01-29T12:02:01.919964635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:01.940078 kubelet[2982]: E0129 12:02:01.939806 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.940244 kubelet[2982]: W0129 12:02:01.939836 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.940244 kubelet[2982]: E0129 12:02:01.940223 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:01.942126 kubelet[2982]: E0129 12:02:01.941834 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.942126 kubelet[2982]: W0129 12:02:01.941865 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.942126 kubelet[2982]: E0129 12:02:01.941892 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.942582 kubelet[2982]: E0129 12:02:01.942558 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.942943 kubelet[2982]: W0129 12:02:01.942679 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.942943 kubelet[2982]: E0129 12:02:01.942708 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.943330 kubelet[2982]: E0129 12:02:01.943306 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.943425 kubelet[2982]: W0129 12:02:01.943410 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.943488 kubelet[2982]: E0129 12:02:01.943477 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.943788 kubelet[2982]: E0129 12:02:01.943773 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.943915 kubelet[2982]: W0129 12:02:01.943898 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.943989 kubelet[2982]: E0129 12:02:01.943976 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.944458 kubelet[2982]: E0129 12:02:01.944439 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.944938 kubelet[2982]: W0129 12:02:01.944698 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.944938 kubelet[2982]: E0129 12:02:01.944728 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:01.946356 kubelet[2982]: E0129 12:02:01.946108 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.946356 kubelet[2982]: W0129 12:02:01.946130 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.946356 kubelet[2982]: E0129 12:02:01.946153 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.946689 kubelet[2982]: E0129 12:02:01.946600 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.946689 kubelet[2982]: W0129 12:02:01.946617 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.946689 kubelet[2982]: E0129 12:02:01.946632 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.951931 kubelet[2982]: E0129 12:02:01.951704 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.951931 kubelet[2982]: W0129 12:02:01.951766 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.953098 kubelet[2982]: E0129 12:02:01.953047 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.954131 kubelet[2982]: E0129 12:02:01.953590 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.954131 kubelet[2982]: W0129 12:02:01.953616 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.954131 kubelet[2982]: E0129 12:02:01.954235 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.956535 kubelet[2982]: E0129 12:02:01.956414 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.956535 kubelet[2982]: W0129 12:02:01.956448 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.956819 kubelet[2982]: E0129 12:02:01.956753 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:01.958916 kubelet[2982]: E0129 12:02:01.958869 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.958916 kubelet[2982]: W0129 12:02:01.958907 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.959838 kubelet[2982]: E0129 12:02:01.959211 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.960689 kubelet[2982]: E0129 12:02:01.960646 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.960689 kubelet[2982]: W0129 12:02:01.960672 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.961202 kubelet[2982]: E0129 12:02:01.960760 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.961706 kubelet[2982]: E0129 12:02:01.961428 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.961706 kubelet[2982]: W0129 12:02:01.961696 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.961866 kubelet[2982]: E0129 12:02:01.961821 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.963445 kubelet[2982]: E0129 12:02:01.963124 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.963445 kubelet[2982]: W0129 12:02:01.963155 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.963445 kubelet[2982]: E0129 12:02:01.963228 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.964272 kubelet[2982]: E0129 12:02:01.964248 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.964788 kubelet[2982]: W0129 12:02:01.964757 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.964788 kubelet[2982]: E0129 12:02:01.964825 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:01.966371 kubelet[2982]: E0129 12:02:01.966292 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.966371 kubelet[2982]: W0129 12:02:01.966316 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.967519 kubelet[2982]: E0129 12:02:01.967067 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.967519 kubelet[2982]: E0129 12:02:01.967374 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.967519 kubelet[2982]: W0129 12:02:01.967387 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.968220 kubelet[2982]: E0129 12:02:01.968192 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.969344 kubelet[2982]: E0129 12:02:01.969306 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.969344 kubelet[2982]: W0129 12:02:01.969340 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.969890 kubelet[2982]: E0129 12:02:01.969715 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.970900 kubelet[2982]: E0129 12:02:01.970864 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.970900 kubelet[2982]: W0129 12:02:01.970887 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.971105 kubelet[2982]: E0129 12:02:01.970975 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:01.973077 kubelet[2982]: E0129 12:02:01.972722 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.974866 containerd[1589]: time="2025-01-29T12:02:01.973527611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5lb5,Uid:56e6a48b-7702-4cea-a191-6f3a685269b6,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:01.975101 kubelet[2982]: W0129 12:02:01.975066 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.975403 kubelet[2982]: E0129 12:02:01.975315 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.975693 kubelet[2982]: E0129 12:02:01.975676 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.975940 kubelet[2982]: W0129 12:02:01.975751 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.975940 kubelet[2982]: E0129 12:02:01.975800 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.978000 kubelet[2982]: E0129 12:02:01.977254 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.978000 kubelet[2982]: W0129 12:02:01.977280 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.978000 kubelet[2982]: E0129 12:02:01.977715 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:01.980078 kubelet[2982]: E0129 12:02:01.979413 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.980078 kubelet[2982]: W0129 12:02:01.979440 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.980078 kubelet[2982]: E0129 12:02:01.979705 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:01.984224 kubelet[2982]: E0129 12:02:01.982775 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:01.984224 kubelet[2982]: W0129 12:02:01.982804 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:01.984224 kubelet[2982]: E0129 12:02:01.982832 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:02.006520 kubelet[2982]: E0129 12:02:02.006389 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:02.006520 kubelet[2982]: W0129 12:02:02.006425 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:02.006520 kubelet[2982]: E0129 12:02:02.006458 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:02.052067 containerd[1589]: time="2025-01-29T12:02:02.048465870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:02.052067 containerd[1589]: time="2025-01-29T12:02:02.048717551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:02.052067 containerd[1589]: time="2025-01-29T12:02:02.049073032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:02.052067 containerd[1589]: time="2025-01-29T12:02:02.050370397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:02.064710 containerd[1589]: time="2025-01-29T12:02:02.064662494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d68bffb9b-gmjlr,Uid:9ad8bc6e-a42d-4698-bbc7-1235ed8120ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\"" Jan 29 12:02:02.078665 containerd[1589]: time="2025-01-29T12:02:02.077030743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 12:02:02.127636 containerd[1589]: time="2025-01-29T12:02:02.126325058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5lb5,Uid:56e6a48b-7702-4cea-a191-6f3a685269b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\"" Jan 29 12:02:03.470021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454765577.mount: Deactivated successfully. 
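[Editor's note] The repeated driver-call.go/plugins.go errors above are kubelet's FlexVolume probing: it finds the driver directory nodeagent~uds, expects an executable named uds inside it (the path /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds logged above), runs it with the single argument "init", and parses its stdout as JSON. Because the binary is not installed yet, stdout is empty and unmarshalling fails with "unexpected end of JSON input". A minimal sketch of that contract, assuming an illustrative driver rather than the real Calico uds binary:

```go
// Minimal sketch of the FlexVolume driver contract kubelet expects at
// <plugin-dir>/<vendor>~<driver>/<driver>. kubelet runs "<driver> init" and
// parses stdout as JSON; an empty stdout produces the error seen in the log.
// This is an illustrative stand-in, not the Calico uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Other driver calls (mount, unmount, ...) are not handled in this sketch.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

kubelet re-probes the plugin directory whenever it changes, which is why the same failure repeats in the log until the driver binary actually appears.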
Jan 29 12:02:03.552079 kubelet[2982]: E0129 12:02:03.551485 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f4x79" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" Jan 29 12:02:03.911156 containerd[1589]: time="2025-01-29T12:02:03.911105014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:03.913321 containerd[1589]: time="2025-01-29T12:02:03.913274783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 29 12:02:03.915597 containerd[1589]: time="2025-01-29T12:02:03.914434627Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:03.917495 containerd[1589]: time="2025-01-29T12:02:03.917441519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:03.921373 containerd[1589]: time="2025-01-29T12:02:03.921298014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.842985666s" Jan 29 12:02:03.921672 containerd[1589]: time="2025-01-29T12:02:03.921381094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 12:02:03.925492 containerd[1589]: time="2025-01-29T12:02:03.925366150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 12:02:03.945183 containerd[1589]: time="2025-01-29T12:02:03.945005667Z" level=info msg="CreateContainer within sandbox \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 12:02:03.975285 containerd[1589]: time="2025-01-29T12:02:03.975092865Z" level=info msg="CreateContainer within sandbox \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\"" Jan 29 12:02:03.977484 containerd[1589]: time="2025-01-29T12:02:03.977285273Z" level=info msg="StartContainer for \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\"" Jan 29 12:02:04.066609 containerd[1589]: time="2025-01-29T12:02:04.066553460Z" level=info msg="StartContainer for \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\" returns successfully" Jan 29 12:02:04.727260 kubelet[2982]: I0129 12:02:04.727156 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d68bffb9b-gmjlr" podStartSLOduration=1.877532443 podStartE2EDuration="3.727132335s" podCreationTimestamp="2025-01-29 12:02:01 +0000 UTC" firstStartedPulling="2025-01-29 12:02:02.074467413 +0000 UTC m=+23.637480161" 
lastFinishedPulling="2025-01-29 12:02:03.924067265 +0000 UTC m=+25.487080053" observedRunningTime="2025-01-29 12:02:04.726383772 +0000 UTC m=+26.289396520" watchObservedRunningTime="2025-01-29 12:02:04.727132335 +0000 UTC m=+26.290145083" Jan 29 12:02:04.730838 kubelet[2982]: E0129 12:02:04.730802 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.731226 kubelet[2982]: W0129 12:02:04.731004 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.731226 kubelet[2982]: E0129 12:02:04.731045 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.732186 kubelet[2982]: E0129 12:02:04.732056 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.732186 kubelet[2982]: W0129 12:02:04.732086 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.732186 kubelet[2982]: E0129 12:02:04.732107 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.732772 kubelet[2982]: E0129 12:02:04.732654 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.732772 kubelet[2982]: W0129 12:02:04.732670 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.732772 kubelet[2982]: E0129 12:02:04.732700 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.733271 kubelet[2982]: E0129 12:02:04.733124 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.733271 kubelet[2982]: W0129 12:02:04.733137 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.733271 kubelet[2982]: E0129 12:02:04.733149 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:04.733712 kubelet[2982]: E0129 12:02:04.733639 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.733712 kubelet[2982]: W0129 12:02:04.733653 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.733712 kubelet[2982]: E0129 12:02:04.733665 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.734071 kubelet[2982]: E0129 12:02:04.733970 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.734071 kubelet[2982]: W0129 12:02:04.733984 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.734071 kubelet[2982]: E0129 12:02:04.734003 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.734398 kubelet[2982]: E0129 12:02:04.734335 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.734398 kubelet[2982]: W0129 12:02:04.734348 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.734398 kubelet[2982]: E0129 12:02:04.734358 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.734795 kubelet[2982]: E0129 12:02:04.734727 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.734795 kubelet[2982]: W0129 12:02:04.734739 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.734795 kubelet[2982]: E0129 12:02:04.734753 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.735355 kubelet[2982]: E0129 12:02:04.735342 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.735523 kubelet[2982]: W0129 12:02:04.735427 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.735523 kubelet[2982]: E0129 12:02:04.735445 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:04.736919 kubelet[2982]: E0129 12:02:04.736897 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.737135 kubelet[2982]: W0129 12:02:04.737004 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.737135 kubelet[2982]: E0129 12:02:04.737028 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.738017 kubelet[2982]: E0129 12:02:04.737310 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.738017 kubelet[2982]: W0129 12:02:04.737323 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.738017 kubelet[2982]: E0129 12:02:04.737334 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.738405 kubelet[2982]: E0129 12:02:04.738273 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.738405 kubelet[2982]: W0129 12:02:04.738298 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.738405 kubelet[2982]: E0129 12:02:04.738315 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.738831 kubelet[2982]: E0129 12:02:04.738626 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.738831 kubelet[2982]: W0129 12:02:04.738645 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.738831 kubelet[2982]: E0129 12:02:04.738656 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.739021 kubelet[2982]: E0129 12:02:04.738999 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.739021 kubelet[2982]: W0129 12:02:04.739019 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.739069 kubelet[2982]: E0129 12:02:04.739035 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:04.739300 kubelet[2982]: E0129 12:02:04.739282 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.739300 kubelet[2982]: W0129 12:02:04.739297 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.739370 kubelet[2982]: E0129 12:02:04.739310 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.771197 kubelet[2982]: E0129 12:02:04.771061 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.771197 kubelet[2982]: W0129 12:02:04.771128 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.773722 kubelet[2982]: E0129 12:02:04.771655 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.773722 kubelet[2982]: E0129 12:02:04.772048 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.773722 kubelet[2982]: W0129 12:02:04.772066 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.773722 kubelet[2982]: E0129 12:02:04.772083 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.773722 kubelet[2982]: E0129 12:02:04.772390 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.773722 kubelet[2982]: W0129 12:02:04.772405 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.773722 kubelet[2982]: E0129 12:02:04.772426 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.773722 kubelet[2982]: E0129 12:02:04.772693 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.773722 kubelet[2982]: W0129 12:02:04.772703 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.773722 kubelet[2982]: E0129 12:02:04.772714 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:04.774251 kubelet[2982]: E0129 12:02:04.772898 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.774251 kubelet[2982]: W0129 12:02:04.772912 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.774251 kubelet[2982]: E0129 12:02:04.772928 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.774251 kubelet[2982]: E0129 12:02:04.773094 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.774251 kubelet[2982]: W0129 12:02:04.773104 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.774251 kubelet[2982]: E0129 12:02:04.773118 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.774251 kubelet[2982]: E0129 12:02:04.773336 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.774251 kubelet[2982]: W0129 12:02:04.773351 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.774251 kubelet[2982]: E0129 12:02:04.773388 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.775377 kubelet[2982]: E0129 12:02:04.774923 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.775377 kubelet[2982]: W0129 12:02:04.774951 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.775377 kubelet[2982]: E0129 12:02:04.775001 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.775377 kubelet[2982]: E0129 12:02:04.775215 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.775377 kubelet[2982]: W0129 12:02:04.775224 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.775377 kubelet[2982]: E0129 12:02:04.775258 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:04.776019 kubelet[2982]: E0129 12:02:04.775706 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.776019 kubelet[2982]: W0129 12:02:04.775721 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.776019 kubelet[2982]: E0129 12:02:04.775740 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.777715 kubelet[2982]: E0129 12:02:04.776464 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.777715 kubelet[2982]: W0129 12:02:04.776482 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.777715 kubelet[2982]: E0129 12:02:04.776861 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.777960 kubelet[2982]: E0129 12:02:04.777941 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.778111 kubelet[2982]: W0129 12:02:04.778006 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.778445 kubelet[2982]: E0129 12:02:04.778292 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.778679 kubelet[2982]: E0129 12:02:04.778570 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.778679 kubelet[2982]: W0129 12:02:04.778587 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.779260 kubelet[2982]: E0129 12:02:04.779239 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.779483 kubelet[2982]: W0129 12:02:04.779330 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.779483 kubelet[2982]: E0129 12:02:04.779352 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.779483 kubelet[2982]: E0129 12:02:04.779440 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:04.780024 kubelet[2982]: E0129 12:02:04.779895 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.780024 kubelet[2982]: W0129 12:02:04.779912 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.780024 kubelet[2982]: E0129 12:02:04.779925 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.780218 kubelet[2982]: E0129 12:02:04.780204 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.781036 kubelet[2982]: W0129 12:02:04.780274 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.781036 kubelet[2982]: E0129 12:02:04.780291 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.782389 kubelet[2982]: E0129 12:02:04.782016 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.782389 kubelet[2982]: W0129 12:02:04.782039 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.782389 kubelet[2982]: E0129 12:02:04.782061 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:02:04.782659 kubelet[2982]: E0129 12:02:04.782642 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:02:04.782728 kubelet[2982]: W0129 12:02:04.782715 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:02:04.782796 kubelet[2982]: E0129 12:02:04.782784 2982 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:02:05.351083 containerd[1589]: time="2025-01-29T12:02:05.350741690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:05.352210 containerd[1589]: time="2025-01-29T12:02:05.352008535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 29 12:02:05.353616 containerd[1589]: time="2025-01-29T12:02:05.353516581Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:05.358420 containerd[1589]: time="2025-01-29T12:02:05.358363719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:05.359913 containerd[1589]: time="2025-01-29T12:02:05.359347843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.433908933s" Jan 29 12:02:05.359913 containerd[1589]: time="2025-01-29T12:02:05.359424483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 12:02:05.363753 containerd[1589]: time="2025-01-29T12:02:05.363712660Z" level=info msg="CreateContainer within sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:02:05.383436 containerd[1589]: time="2025-01-29T12:02:05.383379495Z" level=info msg="CreateContainer within sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca\"" Jan 29 12:02:05.385253 containerd[1589]: time="2025-01-29T12:02:05.384490259Z" level=info msg="StartContainer for \"eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca\"" Jan 29 12:02:05.468924 containerd[1589]: time="2025-01-29T12:02:05.468882862Z" level=info msg="StartContainer for \"eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca\" returns successfully" Jan 29 12:02:05.521216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca-rootfs.mount: Deactivated successfully. 
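[Editor's note] The flexvol-driver container started here comes from the pod2daemon-flexvol image pulled just above; in Calico's deployment it is the calico-node init container that copies the uds FlexVolume driver into the kubelet plugin directory the probing errors complain about, so those errors should stop once it has run. A small diagnostic sketch (a hypothetical helper, not part of any component in this log) that mirrors the <vendor>~<driver>/<driver> layout kubelet uses:

```go
// Checks whether FlexVolume driver binaries are present under the kubelet exec
// plugin directory, using the <vendor>~<driver>/<driver> layout seen in the log.
// The path below is the one kubelet logged on this host.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	pluginDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"
	entries, err := os.ReadDir(pluginDir)
	if err != nil {
		fmt.Println("cannot read plugin dir:", err)
		return
	}
	for _, e := range entries {
		if !e.IsDir() || !strings.Contains(e.Name(), "~") {
			continue
		}
		// The driver binary must be named after the part following "vendor~".
		driver := strings.SplitN(e.Name(), "~", 2)[1]
		bin := filepath.Join(pluginDir, e.Name(), driver)
		if st, err := os.Stat(bin); err == nil && st.Mode()&0o111 != 0 {
			fmt.Println("driver installed:", bin)
		} else {
			fmt.Println("driver missing or not executable:", bin)
		}
	}
}
```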
Jan 29 12:02:05.550757 kubelet[2982]: E0129 12:02:05.550705 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f4x79" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" Jan 29 12:02:05.706629 containerd[1589]: time="2025-01-29T12:02:05.706198408Z" level=info msg="shim disconnected" id=eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca namespace=k8s.io Jan 29 12:02:05.706629 containerd[1589]: time="2025-01-29T12:02:05.706264048Z" level=warning msg="cleaning up after shim disconnected" id=eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca namespace=k8s.io Jan 29 12:02:05.706629 containerd[1589]: time="2025-01-29T12:02:05.706274048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:05.722225 kubelet[2982]: I0129 12:02:05.722120 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:02:05.733643 containerd[1589]: time="2025-01-29T12:02:05.733577753Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:02:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:02:06.728089 containerd[1589]: time="2025-01-29T12:02:06.728032958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:02:07.551467 kubelet[2982]: E0129 12:02:07.550684 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f4x79" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" Jan 29 12:02:09.553219 kubelet[2982]: E0129 12:02:09.551859 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f4x79" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" Jan 29 12:02:09.942808 containerd[1589]: time="2025-01-29T12:02:09.942655795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:09.946818 containerd[1589]: time="2025-01-29T12:02:09.946506409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 12:02:09.950206 containerd[1589]: time="2025-01-29T12:02:09.948757857Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:09.953003 containerd[1589]: time="2025-01-29T12:02:09.952937352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:09.954799 containerd[1589]: time="2025-01-29T12:02:09.954725919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.226639401s" Jan 29 12:02:09.954799 containerd[1589]: time="2025-01-29T12:02:09.954795079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 12:02:09.958930 containerd[1589]: time="2025-01-29T12:02:09.958882774Z" level=info msg="CreateContainer within sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:02:09.978183 containerd[1589]: time="2025-01-29T12:02:09.978074564Z" level=info msg="CreateContainer within sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97\"" Jan 29 12:02:09.979857 containerd[1589]: time="2025-01-29T12:02:09.979129688Z" level=info msg="StartContainer for \"5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97\"" Jan 29 12:02:10.060074 containerd[1589]: time="2025-01-29T12:02:10.060006540Z" level=info msg="StartContainer for \"5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97\" returns successfully" Jan 29 12:02:10.659438 containerd[1589]: time="2025-01-29T12:02:10.659388140Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:02:10.689898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97-rootfs.mount: Deactivated successfully. 
Jan 29 12:02:10.745770 kubelet[2982]: I0129 12:02:10.745667 2982 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:02:10.798490 kubelet[2982]: I0129 12:02:10.797802 2982 topology_manager.go:215] "Topology Admit Handler" podUID="411fde72-7e67-4c52-beb5-274e475ac3a2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7httm" Jan 29 12:02:10.804964 kubelet[2982]: I0129 12:02:10.804497 2982 topology_manager.go:215] "Topology Admit Handler" podUID="d5e90633-37b9-444b-9d34-7ea4734213c7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hrwhb" Jan 29 12:02:10.816806 kubelet[2982]: I0129 12:02:10.816555 2982 topology_manager.go:215] "Topology Admit Handler" podUID="3a58702a-d30f-45ab-b50c-e13b846313b4" podNamespace="calico-apiserver" podName="calico-apiserver-7cf4d54ff6-gcwqx" Jan 29 12:02:10.817605 kubelet[2982]: I0129 12:02:10.817458 2982 topology_manager.go:215] "Topology Admit Handler" podUID="8ea098a5-9143-4ffd-a41c-59ec409201c4" podNamespace="calico-system" podName="calico-kube-controllers-77c4b4575d-lt42c" Jan 29 12:02:10.818799 kubelet[2982]: I0129 12:02:10.818656 2982 topology_manager.go:215] "Topology Admit Handler" podUID="3c7ffc06-7657-4b76-aeac-67f169d0c448" podNamespace="calico-apiserver" podName="calico-apiserver-7cf4d54ff6-8kcss" Jan 29 12:02:10.819285 kubelet[2982]: I0129 12:02:10.819109 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbzw\" (UniqueName: \"kubernetes.io/projected/411fde72-7e67-4c52-beb5-274e475ac3a2-kube-api-access-mvbzw\") pod \"coredns-7db6d8ff4d-7httm\" (UID: \"411fde72-7e67-4c52-beb5-274e475ac3a2\") " pod="kube-system/coredns-7db6d8ff4d-7httm" Jan 29 12:02:10.819285 kubelet[2982]: I0129 12:02:10.819155 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411fde72-7e67-4c52-beb5-274e475ac3a2-config-volume\") pod \"coredns-7db6d8ff4d-7httm\" (UID: \"411fde72-7e67-4c52-beb5-274e475ac3a2\") " pod="kube-system/coredns-7db6d8ff4d-7httm" Jan 29 12:02:10.903072 containerd[1589]: time="2025-01-29T12:02:10.902985018Z" level=info msg="shim disconnected" id=5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97 namespace=k8s.io Jan 29 12:02:10.903072 containerd[1589]: time="2025-01-29T12:02:10.903067778Z" level=warning msg="cleaning up after shim disconnected" id=5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97 namespace=k8s.io Jan 29 12:02:10.903072 containerd[1589]: time="2025-01-29T12:02:10.903077258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:10.920271 kubelet[2982]: I0129 12:02:10.919862 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfm4d\" (UniqueName: \"kubernetes.io/projected/d5e90633-37b9-444b-9d34-7ea4734213c7-kube-api-access-qfm4d\") pod \"coredns-7db6d8ff4d-hrwhb\" (UID: \"d5e90633-37b9-444b-9d34-7ea4734213c7\") " pod="kube-system/coredns-7db6d8ff4d-hrwhb" Jan 29 12:02:10.920271 kubelet[2982]: I0129 12:02:10.919919 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea098a5-9143-4ffd-a41c-59ec409201c4-tigera-ca-bundle\") pod \"calico-kube-controllers-77c4b4575d-lt42c\" (UID: \"8ea098a5-9143-4ffd-a41c-59ec409201c4\") " pod="calico-system/calico-kube-controllers-77c4b4575d-lt42c" Jan 29 12:02:10.920271 
kubelet[2982]: I0129 12:02:10.919938 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtjh\" (UniqueName: \"kubernetes.io/projected/8ea098a5-9143-4ffd-a41c-59ec409201c4-kube-api-access-tmtjh\") pod \"calico-kube-controllers-77c4b4575d-lt42c\" (UID: \"8ea098a5-9143-4ffd-a41c-59ec409201c4\") " pod="calico-system/calico-kube-controllers-77c4b4575d-lt42c" Jan 29 12:02:10.920271 kubelet[2982]: I0129 12:02:10.919958 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c7ffc06-7657-4b76-aeac-67f169d0c448-calico-apiserver-certs\") pod \"calico-apiserver-7cf4d54ff6-8kcss\" (UID: \"3c7ffc06-7657-4b76-aeac-67f169d0c448\") " pod="calico-apiserver/calico-apiserver-7cf4d54ff6-8kcss" Jan 29 12:02:10.920987 kubelet[2982]: I0129 12:02:10.919991 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb6mg\" (UniqueName: \"kubernetes.io/projected/3a58702a-d30f-45ab-b50c-e13b846313b4-kube-api-access-nb6mg\") pod \"calico-apiserver-7cf4d54ff6-gcwqx\" (UID: \"3a58702a-d30f-45ab-b50c-e13b846313b4\") " pod="calico-apiserver/calico-apiserver-7cf4d54ff6-gcwqx" Jan 29 12:02:10.920987 kubelet[2982]: I0129 12:02:10.920852 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5e90633-37b9-444b-9d34-7ea4734213c7-config-volume\") pod \"coredns-7db6d8ff4d-hrwhb\" (UID: \"d5e90633-37b9-444b-9d34-7ea4734213c7\") " pod="kube-system/coredns-7db6d8ff4d-hrwhb" Jan 29 12:02:10.920987 kubelet[2982]: I0129 12:02:10.920884 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvwkp\" (UniqueName: \"kubernetes.io/projected/3c7ffc06-7657-4b76-aeac-67f169d0c448-kube-api-access-vvwkp\") pod \"calico-apiserver-7cf4d54ff6-8kcss\" (UID: \"3c7ffc06-7657-4b76-aeac-67f169d0c448\") " pod="calico-apiserver/calico-apiserver-7cf4d54ff6-8kcss" Jan 29 12:02:10.921394 kubelet[2982]: I0129 12:02:10.921273 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a58702a-d30f-45ab-b50c-e13b846313b4-calico-apiserver-certs\") pod \"calico-apiserver-7cf4d54ff6-gcwqx\" (UID: \"3a58702a-d30f-45ab-b50c-e13b846313b4\") " pod="calico-apiserver/calico-apiserver-7cf4d54ff6-gcwqx" Jan 29 12:02:11.126575 containerd[1589]: time="2025-01-29T12:02:11.125388934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrwhb,Uid:d5e90633-37b9-444b-9d34-7ea4734213c7,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:11.126575 containerd[1589]: time="2025-01-29T12:02:11.126465098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-8kcss,Uid:3c7ffc06-7657-4b76-aeac-67f169d0c448,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:02:11.127649 containerd[1589]: time="2025-01-29T12:02:11.127581062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7httm,Uid:411fde72-7e67-4c52-beb5-274e475ac3a2,Namespace:kube-system,Attempt:0,}" Jan 29 12:02:11.132379 containerd[1589]: time="2025-01-29T12:02:11.132334919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-gcwqx,Uid:3a58702a-d30f-45ab-b50c-e13b846313b4,Namespace:calico-apiserver,Attempt:0,}" Jan 29 
12:02:11.160473 containerd[1589]: time="2025-01-29T12:02:11.160332539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c4b4575d-lt42c,Uid:8ea098a5-9143-4ffd-a41c-59ec409201c4,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:11.368439 containerd[1589]: time="2025-01-29T12:02:11.368009759Z" level=error msg="Failed to destroy network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.369884 containerd[1589]: time="2025-01-29T12:02:11.369115083Z" level=error msg="encountered an error cleaning up failed sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.369884 containerd[1589]: time="2025-01-29T12:02:11.369318084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-8kcss,Uid:3c7ffc06-7657-4b76-aeac-67f169d0c448,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.370106 kubelet[2982]: E0129 12:02:11.369791 2982 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.370106 kubelet[2982]: E0129 12:02:11.369860 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-8kcss" Jan 29 12:02:11.370106 kubelet[2982]: E0129 12:02:11.369881 2982 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-8kcss" Jan 29 12:02:11.370234 kubelet[2982]: E0129 12:02:11.370027 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf4d54ff6-8kcss_calico-apiserver(3c7ffc06-7657-4b76-aeac-67f169d0c448)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf4d54ff6-8kcss_calico-apiserver(3c7ffc06-7657-4b76-aeac-67f169d0c448)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-8kcss" podUID="3c7ffc06-7657-4b76-aeac-67f169d0c448" Jan 29 12:02:11.391832 containerd[1589]: time="2025-01-29T12:02:11.391773684Z" level=error msg="Failed to destroy network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.395045 containerd[1589]: time="2025-01-29T12:02:11.394970015Z" level=error msg="encountered an error cleaning up failed sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.395225 containerd[1589]: time="2025-01-29T12:02:11.395073695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-gcwqx,Uid:3a58702a-d30f-45ab-b50c-e13b846313b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.395956 kubelet[2982]: E0129 12:02:11.395859 2982 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.395956 kubelet[2982]: E0129 12:02:11.395941 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-gcwqx" Jan 29 12:02:11.395956 kubelet[2982]: E0129 12:02:11.395965 2982 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-gcwqx" Jan 29 12:02:11.396146 kubelet[2982]: E0129 12:02:11.396012 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf4d54ff6-gcwqx_calico-apiserver(3a58702a-d30f-45ab-b50c-e13b846313b4)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-7cf4d54ff6-gcwqx_calico-apiserver(3a58702a-d30f-45ab-b50c-e13b846313b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-gcwqx" podUID="3a58702a-d30f-45ab-b50c-e13b846313b4" Jan 29 12:02:11.415630 containerd[1589]: time="2025-01-29T12:02:11.415564648Z" level=error msg="Failed to destroy network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.416322 containerd[1589]: time="2025-01-29T12:02:11.416110090Z" level=error msg="encountered an error cleaning up failed sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.416322 containerd[1589]: time="2025-01-29T12:02:11.416196731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7httm,Uid:411fde72-7e67-4c52-beb5-274e475ac3a2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.416471 containerd[1589]: time="2025-01-29T12:02:11.416390211Z" level=error msg="Failed to destroy network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.417822 containerd[1589]: time="2025-01-29T12:02:11.417574335Z" level=error msg="encountered an error cleaning up failed sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.417822 containerd[1589]: time="2025-01-29T12:02:11.417696576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrwhb,Uid:d5e90633-37b9-444b-9d34-7ea4734213c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.417963 kubelet[2982]: E0129 12:02:11.417719 2982 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.417963 kubelet[2982]: E0129 12:02:11.417901 2982 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.417963 kubelet[2982]: E0129 12:02:11.417931 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hrwhb" Jan 29 12:02:11.417963 kubelet[2982]: E0129 12:02:11.417952 2982 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hrwhb" Jan 29 12:02:11.418141 kubelet[2982]: E0129 12:02:11.417991 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hrwhb_kube-system(d5e90633-37b9-444b-9d34-7ea4734213c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hrwhb_kube-system(d5e90633-37b9-444b-9d34-7ea4734213c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hrwhb" podUID="d5e90633-37b9-444b-9d34-7ea4734213c7" Jan 29 12:02:11.418141 kubelet[2982]: E0129 12:02:11.418126 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7httm" Jan 29 12:02:11.418258 kubelet[2982]: E0129 12:02:11.418146 2982 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7httm" Jan 29 12:02:11.418287 kubelet[2982]: E0129 12:02:11.418264 2982 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7httm_kube-system(411fde72-7e67-4c52-beb5-274e475ac3a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7httm_kube-system(411fde72-7e67-4c52-beb5-274e475ac3a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7httm" podUID="411fde72-7e67-4c52-beb5-274e475ac3a2" Jan 29 12:02:11.422928 containerd[1589]: time="2025-01-29T12:02:11.422559433Z" level=error msg="Failed to destroy network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.423597 containerd[1589]: time="2025-01-29T12:02:11.423251156Z" level=error msg="encountered an error cleaning up failed sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.423597 containerd[1589]: time="2025-01-29T12:02:11.423327076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c4b4575d-lt42c,Uid:8ea098a5-9143-4ffd-a41c-59ec409201c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.424723 kubelet[2982]: E0129 12:02:11.424429 2982 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.424723 kubelet[2982]: E0129 12:02:11.424495 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77c4b4575d-lt42c" Jan 29 12:02:11.424723 kubelet[2982]: E0129 12:02:11.424560 2982 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-77c4b4575d-lt42c" Jan 29 12:02:11.424934 kubelet[2982]: E0129 12:02:11.424656 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77c4b4575d-lt42c_calico-system(8ea098a5-9143-4ffd-a41c-59ec409201c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77c4b4575d-lt42c_calico-system(8ea098a5-9143-4ffd-a41c-59ec409201c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77c4b4575d-lt42c" podUID="8ea098a5-9143-4ffd-a41c-59ec409201c4" Jan 29 12:02:11.559591 containerd[1589]: time="2025-01-29T12:02:11.559071440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f4x79,Uid:f8acab89-8346-440a-bbf8-eaa1717109a0,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:11.627879 containerd[1589]: time="2025-01-29T12:02:11.627717484Z" level=error msg="Failed to destroy network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.628132 containerd[1589]: time="2025-01-29T12:02:11.628095286Z" level=error msg="encountered an error cleaning up failed sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.628747 containerd[1589]: time="2025-01-29T12:02:11.628160086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f4x79,Uid:f8acab89-8346-440a-bbf8-eaa1717109a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.629424 kubelet[2982]: E0129 12:02:11.629377 2982 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.629504 kubelet[2982]: E0129 12:02:11.629446 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f4x79" Jan 29 12:02:11.629504 kubelet[2982]: E0129 12:02:11.629466 2982 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f4x79" Jan 29 12:02:11.629602 kubelet[2982]: E0129 12:02:11.629529 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f4x79_calico-system(f8acab89-8346-440a-bbf8-eaa1717109a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f4x79_calico-system(f8acab89-8346-440a-bbf8-eaa1717109a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f4x79" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" Jan 29 12:02:11.744228 kubelet[2982]: I0129 12:02:11.744081 2982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:11.746523 containerd[1589]: time="2025-01-29T12:02:11.746024866Z" level=info msg="StopPodSandbox for \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\"" Jan 29 12:02:11.746523 containerd[1589]: time="2025-01-29T12:02:11.746229107Z" level=info msg="Ensure that sandbox be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9 in task-service has been cleanup successfully" Jan 29 12:02:11.755767 containerd[1589]: time="2025-01-29T12:02:11.755719261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:02:11.756306 kubelet[2982]: I0129 12:02:11.756272 2982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:11.758410 containerd[1589]: time="2025-01-29T12:02:11.757906988Z" level=info msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\"" Jan 29 12:02:11.758410 containerd[1589]: time="2025-01-29T12:02:11.758105469Z" level=info msg="Ensure that sandbox d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2 in task-service has been cleanup successfully" Jan 29 12:02:11.767330 kubelet[2982]: I0129 12:02:11.767084 2982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:11.769890 containerd[1589]: time="2025-01-29T12:02:11.769760151Z" level=info msg="StopPodSandbox for \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\"" Jan 29 12:02:11.770160 containerd[1589]: time="2025-01-29T12:02:11.770138232Z" level=info msg="Ensure that sandbox 4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93 in task-service has been cleanup successfully" Jan 29 12:02:11.775478 kubelet[2982]: I0129 12:02:11.775095 2982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:11.777598 containerd[1589]: time="2025-01-29T12:02:11.777550538Z" level=info 
msg="StopPodSandbox for \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\"" Jan 29 12:02:11.779130 containerd[1589]: time="2025-01-29T12:02:11.778765783Z" level=info msg="Ensure that sandbox ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966 in task-service has been cleanup successfully" Jan 29 12:02:11.790406 kubelet[2982]: I0129 12:02:11.790360 2982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:11.794751 containerd[1589]: time="2025-01-29T12:02:11.794230638Z" level=info msg="StopPodSandbox for \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\"" Jan 29 12:02:11.794751 containerd[1589]: time="2025-01-29T12:02:11.794466959Z" level=info msg="Ensure that sandbox b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd in task-service has been cleanup successfully" Jan 29 12:02:11.799495 kubelet[2982]: I0129 12:02:11.799469 2982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:11.804413 containerd[1589]: time="2025-01-29T12:02:11.803388750Z" level=info msg="StopPodSandbox for \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\"" Jan 29 12:02:11.806496 containerd[1589]: time="2025-01-29T12:02:11.806446841Z" level=info msg="Ensure that sandbox 9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e in task-service has been cleanup successfully" Jan 29 12:02:11.873346 containerd[1589]: time="2025-01-29T12:02:11.871052192Z" level=error msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" failed" error="failed to destroy network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.873548 kubelet[2982]: E0129 12:02:11.872285 2982 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:11.873548 kubelet[2982]: E0129 12:02:11.872348 2982 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2"} Jan 29 12:02:11.873548 kubelet[2982]: E0129 12:02:11.872405 2982 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ea098a5-9143-4ffd-a41c-59ec409201c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:11.873548 kubelet[2982]: E0129 12:02:11.872428 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"8ea098a5-9143-4ffd-a41c-59ec409201c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77c4b4575d-lt42c" podUID="8ea098a5-9143-4ffd-a41c-59ec409201c4" Jan 29 12:02:11.875716 containerd[1589]: time="2025-01-29T12:02:11.874229403Z" level=error msg="StopPodSandbox for \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\" failed" error="failed to destroy network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.875845 kubelet[2982]: E0129 12:02:11.875462 2982 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:11.875845 kubelet[2982]: E0129 12:02:11.875644 2982 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966"} Jan 29 12:02:11.876821 kubelet[2982]: E0129 12:02:11.875683 2982 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8acab89-8346-440a-bbf8-eaa1717109a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:11.876821 kubelet[2982]: E0129 12:02:11.876406 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8acab89-8346-440a-bbf8-eaa1717109a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f4x79" podUID="f8acab89-8346-440a-bbf8-eaa1717109a0" Jan 29 12:02:11.899200 containerd[1589]: time="2025-01-29T12:02:11.898472249Z" level=error msg="StopPodSandbox for \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\" failed" error="failed to destroy network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.899379 kubelet[2982]: E0129 12:02:11.898869 2982 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:11.899379 kubelet[2982]: E0129 12:02:11.898957 2982 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd"} Jan 29 12:02:11.900157 kubelet[2982]: E0129 12:02:11.899023 2982 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"411fde72-7e67-4c52-beb5-274e475ac3a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:11.900157 kubelet[2982]: E0129 12:02:11.899790 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"411fde72-7e67-4c52-beb5-274e475ac3a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7httm" podUID="411fde72-7e67-4c52-beb5-274e475ac3a2" Jan 29 12:02:11.901214 kubelet[2982]: E0129 12:02:11.900703 2982 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:11.901214 kubelet[2982]: E0129 12:02:11.900772 2982 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9"} Jan 29 12:02:11.901367 containerd[1589]: time="2025-01-29T12:02:11.900347016Z" level=error msg="StopPodSandbox for \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\" failed" error="failed to destroy network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.902291 kubelet[2982]: E0129 12:02:11.902068 2982 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5e90633-37b9-444b-9d34-7ea4734213c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:11.902291 kubelet[2982]: E0129 12:02:11.902155 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5e90633-37b9-444b-9d34-7ea4734213c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hrwhb" podUID="d5e90633-37b9-444b-9d34-7ea4734213c7" Jan 29 12:02:11.909129 containerd[1589]: time="2025-01-29T12:02:11.908724806Z" level=error msg="StopPodSandbox for \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\" failed" error="failed to destroy network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.909261 kubelet[2982]: E0129 12:02:11.908975 2982 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:11.909261 kubelet[2982]: E0129 12:02:11.909024 2982 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93"} Jan 29 12:02:11.909261 kubelet[2982]: E0129 12:02:11.909059 2982 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a58702a-d30f-45ab-b50c-e13b846313b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:11.909261 kubelet[2982]: E0129 12:02:11.909082 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a58702a-d30f-45ab-b50c-e13b846313b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-gcwqx" podUID="3a58702a-d30f-45ab-b50c-e13b846313b4" Jan 29 12:02:11.914443 containerd[1589]: time="2025-01-29T12:02:11.914375426Z" level=error msg="StopPodSandbox for \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\" failed" error="failed to destroy network for sandbox 
\"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:02:11.914842 kubelet[2982]: E0129 12:02:11.914738 2982 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:11.914842 kubelet[2982]: E0129 12:02:11.914797 2982 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e"} Jan 29 12:02:11.914842 kubelet[2982]: E0129 12:02:11.914833 2982 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c7ffc06-7657-4b76-aeac-67f169d0c448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:02:11.915039 kubelet[2982]: E0129 12:02:11.914855 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c7ffc06-7657-4b76-aeac-67f169d0c448\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-8kcss" podUID="3c7ffc06-7657-4b76-aeac-67f169d0c448" Jan 29 12:02:15.787218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599895685.mount: Deactivated successfully. 
Jan 29 12:02:15.888325 containerd[1589]: time="2025-01-29T12:02:15.888201475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:15.890236 containerd[1589]: time="2025-01-29T12:02:15.889888600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 12:02:15.891778 containerd[1589]: time="2025-01-29T12:02:15.891723847Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:15.894298 containerd[1589]: time="2025-01-29T12:02:15.894198055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:15.895467 containerd[1589]: time="2025-01-29T12:02:15.894988098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.139215397s" Jan 29 12:02:15.895467 containerd[1589]: time="2025-01-29T12:02:15.895033498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 12:02:15.918105 containerd[1589]: time="2025-01-29T12:02:15.917110774Z" level=info msg="CreateContainer within sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:02:15.941898 containerd[1589]: time="2025-01-29T12:02:15.941824378Z" level=info msg="CreateContainer within sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\"" Jan 29 12:02:15.943668 containerd[1589]: time="2025-01-29T12:02:15.942584901Z" level=info msg="StartContainer for \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\"" Jan 29 12:02:16.017896 containerd[1589]: time="2025-01-29T12:02:16.017640797Z" level=info msg="StartContainer for \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\" returns successfully" Jan 29 12:02:16.140456 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:02:16.140608 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
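For the pull that just completed, the log reports 137671762 bytes read and a total time of 4.139215397s for ghcr.io/flatcar/calico/node:v3.29.1. As a quick back-of-the-envelope check (a throwaway calculation using only those two logged figures), that works out to an effective transfer rate of roughly 33 MB/s:

```go
package main

import "fmt"

func main() {
	const bytesRead = 137671762 // "bytes read=137671762" from the containerd log
	const seconds = 4.139215397 // "in 4.139215397s" from the Pulled image message
	rate := float64(bytesRead) / seconds
	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
}
```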
Jan 29 12:02:16.840145 kubelet[2982]: I0129 12:02:16.840071 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x5lb5" podStartSLOduration=2.072151025 podStartE2EDuration="15.84003842s" podCreationTimestamp="2025-01-29 12:02:01 +0000 UTC" firstStartedPulling="2025-01-29 12:02:02.129650832 +0000 UTC m=+23.692663580" lastFinishedPulling="2025-01-29 12:02:15.897538227 +0000 UTC m=+37.460550975" observedRunningTime="2025-01-29 12:02:16.837446531 +0000 UTC m=+38.400459279" watchObservedRunningTime="2025-01-29 12:02:16.84003842 +0000 UTC m=+38.403051168" Jan 29 12:02:17.615175 kubelet[2982]: I0129 12:02:17.615120 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:02:17.949221 kernel: bpftool[4190]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:02:18.228932 systemd-networkd[1239]: vxlan.calico: Link UP Jan 29 12:02:18.228957 systemd-networkd[1239]: vxlan.calico: Gained carrier Jan 29 12:02:19.595728 systemd-networkd[1239]: vxlan.calico: Gained IPv6LL Jan 29 12:02:22.554133 containerd[1589]: time="2025-01-29T12:02:22.554050351Z" level=info msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\"" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.663 [INFO][4323] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.666 [INFO][4323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" iface="eth0" netns="/var/run/netns/cni-9f3c0f52-225a-7ec1-6593-209a7bb62b2d" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.669 [INFO][4323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" iface="eth0" netns="/var/run/netns/cni-9f3c0f52-225a-7ec1-6593-209a7bb62b2d" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.670 [INFO][4323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" iface="eth0" netns="/var/run/netns/cni-9f3c0f52-225a-7ec1-6593-209a7bb62b2d" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.670 [INFO][4323] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.670 [INFO][4323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.719 [INFO][4329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.719 [INFO][4329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.719 [INFO][4329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.730 [WARNING][4329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.730 [INFO][4329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.733 [INFO][4329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:22.737960 containerd[1589]: 2025-01-29 12:02:22.736 [INFO][4323] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:22.741002 containerd[1589]: time="2025-01-29T12:02:22.740872909Z" level=info msg="TearDown network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" successfully" Jan 29 12:02:22.741002 containerd[1589]: time="2025-01-29T12:02:22.740923350Z" level=info msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" returns successfully" Jan 29 12:02:22.742703 systemd[1]: run-netns-cni\x2d9f3c0f52\x2d225a\x2d7ec1\x2d6593\x2d209a7bb62b2d.mount: Deactivated successfully. Jan 29 12:02:22.743201 containerd[1589]: time="2025-01-29T12:02:22.742749675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c4b4575d-lt42c,Uid:8ea098a5-9143-4ffd-a41c-59ec409201c4,Namespace:calico-system,Attempt:1,}" Jan 29 12:02:22.915913 systemd-networkd[1239]: cali177ca7502d0: Link UP Jan 29 12:02:22.916156 systemd-networkd[1239]: cali177ca7502d0: Gained carrier Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.815 [INFO][4336] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0 calico-kube-controllers-77c4b4575d- calico-system 8ea098a5-9143-4ffd-a41c-59ec409201c4 786 0 2025-01-29 12:02:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77c4b4575d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-b-488529c6ca calico-kube-controllers-77c4b4575d-lt42c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali177ca7502d0 [] []}} ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.815 [INFO][4336] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" 
WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.848 [INFO][4347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.866 [INFO][4347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-b-488529c6ca", "pod":"calico-kube-controllers-77c4b4575d-lt42c", "timestamp":"2025-01-29 12:02:22.848614335 +0000 UTC"}, Hostname:"ci-4081-3-0-b-488529c6ca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.866 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.866 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.866 [INFO][4347] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-b-488529c6ca' Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.870 [INFO][4347] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.877 [INFO][4347] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.884 [INFO][4347] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.888 [INFO][4347] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.892 [INFO][4347] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.892 [INFO][4347] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.895 [INFO][4347] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871 Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.900 [INFO][4347] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" 
host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.909 [INFO][4347] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.65/26] block=192.168.18.64/26 handle="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.909 [INFO][4347] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.65/26] handle="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.909 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:22.947225 containerd[1589]: 2025-01-29 12:02:22.909 [INFO][4347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.65/26] IPv6=[] ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.947816 containerd[1589]: 2025-01-29 12:02:22.912 [INFO][4336] cni-plugin/k8s.go 386: Populated endpoint ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0", GenerateName:"calico-kube-controllers-77c4b4575d-", Namespace:"calico-system", SelfLink:"", UID:"8ea098a5-9143-4ffd-a41c-59ec409201c4", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c4b4575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"", Pod:"calico-kube-controllers-77c4b4575d-lt42c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali177ca7502d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:22.947816 containerd[1589]: 2025-01-29 12:02:22.912 [INFO][4336] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.65/32] ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.947816 containerd[1589]: 2025-01-29 12:02:22.912 [INFO][4336] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali177ca7502d0 ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.947816 containerd[1589]: 2025-01-29 12:02:22.915 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.947816 containerd[1589]: 2025-01-29 12:02:22.918 [INFO][4336] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0", GenerateName:"calico-kube-controllers-77c4b4575d-", Namespace:"calico-system", SelfLink:"", UID:"8ea098a5-9143-4ffd-a41c-59ec409201c4", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c4b4575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871", Pod:"calico-kube-controllers-77c4b4575d-lt42c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali177ca7502d0", MAC:"d2:21:40:df:6a:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:22.947816 containerd[1589]: 2025-01-29 12:02:22.938 [INFO][4336] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Namespace="calico-system" Pod="calico-kube-controllers-77c4b4575d-lt42c" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:22.978966 containerd[1589]: time="2025-01-29T12:02:22.978746031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:22.979507 containerd[1589]: time="2025-01-29T12:02:22.978820832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:22.979507 containerd[1589]: time="2025-01-29T12:02:22.979432554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:22.980484 containerd[1589]: time="2025-01-29T12:02:22.980338637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:23.054469 containerd[1589]: time="2025-01-29T12:02:23.054211272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c4b4575d-lt42c,Uid:8ea098a5-9143-4ffd-a41c-59ec409201c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\"" Jan 29 12:02:23.058639 containerd[1589]: time="2025-01-29T12:02:23.056774400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:02:23.555019 containerd[1589]: time="2025-01-29T12:02:23.554945103Z" level=info msg="StopPodSandbox for \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\"" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.619 [INFO][4428] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.619 [INFO][4428] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" iface="eth0" netns="/var/run/netns/cni-9e773f12-d8ed-9cf1-7132-89f1bdac4790" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.620 [INFO][4428] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" iface="eth0" netns="/var/run/netns/cni-9e773f12-d8ed-9cf1-7132-89f1bdac4790" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.620 [INFO][4428] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" iface="eth0" netns="/var/run/netns/cni-9e773f12-d8ed-9cf1-7132-89f1bdac4790" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.620 [INFO][4428] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.620 [INFO][4428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.642 [INFO][4434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.642 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.642 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.654 [WARNING][4434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.654 [INFO][4434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.656 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:23.660024 containerd[1589]: 2025-01-29 12:02:23.658 [INFO][4428] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:23.662717 containerd[1589]: time="2025-01-29T12:02:23.660296917Z" level=info msg="TearDown network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\" successfully" Jan 29 12:02:23.662717 containerd[1589]: time="2025-01-29T12:02:23.660337077Z" level=info msg="StopPodSandbox for \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\" returns successfully" Jan 29 12:02:23.662717 containerd[1589]: time="2025-01-29T12:02:23.661312681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-gcwqx,Uid:3a58702a-d30f-45ab-b50c-e13b846313b4,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:02:23.664183 systemd[1]: run-netns-cni\x2d9e773f12\x2dd8ed\x2d9cf1\x2d7132\x2d89f1bdac4790.mount: Deactivated successfully. 
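In the IPAM trace above, Calico confirms the node's affinity for block 192.168.18.64/26 and assigns 192.168.18.65 to the kube-controllers pod; the next sandbox below receives 192.168.18.66 from the same block. A small Go sketch, using only the CIDRs printed in the log (this is not Calico's IPAM code, just the block arithmetic), of what that /26 spans:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and first assignment taken verbatim from the ipam/ipam.go log lines.
	block := netip.MustParsePrefix("192.168.18.64/26")
	assigned := netip.MustParseAddr("192.168.18.65")

	// A /26 holds 2^(32-26) = 64 addresses: 192.168.18.64 through 192.168.18.127.
	first := block.Masked().Addr()
	last := first
	for i := 0; i < 63; i++ {
		last = last.Next()
	}
	fmt.Printf("block %s spans %s - %s (64 addresses)\n", block, first, last)
	fmt.Println("first workload address:", assigned, "inside block:", block.Contains(assigned))
}
```

With the network address at .64 and 64 addresses per block, sequential assignments starting at .65 match what the log records for the first two workloads.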
Jan 29 12:02:23.838294 systemd-networkd[1239]: cali15e5b532e7b: Link UP Jan 29 12:02:23.838650 systemd-networkd[1239]: cali15e5b532e7b: Gained carrier Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.728 [INFO][4441] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0 calico-apiserver-7cf4d54ff6- calico-apiserver 3a58702a-d30f-45ab-b50c-e13b846313b4 793 0 2025-01-29 12:02:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf4d54ff6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-b-488529c6ca calico-apiserver-7cf4d54ff6-gcwqx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali15e5b532e7b [] []}} ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.728 [INFO][4441] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.767 [INFO][4452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" HandleID="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.782 [INFO][4452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" HandleID="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-b-488529c6ca", "pod":"calico-apiserver-7cf4d54ff6-gcwqx", "timestamp":"2025-01-29 12:02:23.767923899 +0000 UTC"}, Hostname:"ci-4081-3-0-b-488529c6ca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.782 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.782 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.782 [INFO][4452] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-b-488529c6ca' Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.786 [INFO][4452] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.793 [INFO][4452] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.803 [INFO][4452] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.806 [INFO][4452] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.810 [INFO][4452] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.810 [INFO][4452] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.813 [INFO][4452] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.819 [INFO][4452] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.829 [INFO][4452] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.66/26] block=192.168.18.64/26 handle="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.829 [INFO][4452] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.66/26] handle="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.829 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:02:23.865626 containerd[1589]: 2025-01-29 12:02:23.830 [INFO][4452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.66/26] IPv6=[] ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" HandleID="k8s-pod-network.bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.867782 containerd[1589]: 2025-01-29 12:02:23.833 [INFO][4441] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a58702a-d30f-45ab-b50c-e13b846313b4", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"", Pod:"calico-apiserver-7cf4d54ff6-gcwqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15e5b532e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:23.867782 containerd[1589]: 2025-01-29 12:02:23.833 [INFO][4441] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.66/32] ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.867782 containerd[1589]: 2025-01-29 12:02:23.833 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15e5b532e7b ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.867782 containerd[1589]: 2025-01-29 12:02:23.835 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.867782 containerd[1589]: 2025-01-29 12:02:23.837 [INFO][4441] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a58702a-d30f-45ab-b50c-e13b846313b4", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c", Pod:"calico-apiserver-7cf4d54ff6-gcwqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15e5b532e7b", MAC:"42:f1:ba:91:ae:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:23.867782 containerd[1589]: 2025-01-29 12:02:23.856 [INFO][4441] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-gcwqx" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:23.896890 containerd[1589]: time="2025-01-29T12:02:23.896780989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:23.897682 containerd[1589]: time="2025-01-29T12:02:23.897350430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:23.897682 containerd[1589]: time="2025-01-29T12:02:23.897377110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:23.897682 containerd[1589]: time="2025-01-29T12:02:23.897569511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:23.968403 containerd[1589]: time="2025-01-29T12:02:23.968353016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-gcwqx,Uid:3a58702a-d30f-45ab-b50c-e13b846313b4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c\"" Jan 29 12:02:24.553892 containerd[1589]: time="2025-01-29T12:02:24.553682461Z" level=info msg="StopPodSandbox for \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\"" Jan 29 12:02:24.556046 containerd[1589]: time="2025-01-29T12:02:24.554220103Z" level=info msg="StopPodSandbox for \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\"" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.667 [INFO][4528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.667 [INFO][4528] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" iface="eth0" netns="/var/run/netns/cni-e312a3ce-d71e-335c-13fe-7a8dd1f6f52b" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.668 [INFO][4528] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" iface="eth0" netns="/var/run/netns/cni-e312a3ce-d71e-335c-13fe-7a8dd1f6f52b" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.668 [INFO][4528] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" iface="eth0" netns="/var/run/netns/cni-e312a3ce-d71e-335c-13fe-7a8dd1f6f52b" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.668 [INFO][4528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.668 [INFO][4528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.730 [INFO][4553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.730 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.730 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.742 [WARNING][4553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.742 [INFO][4553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.744 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:24.748001 containerd[1589]: 2025-01-29 12:02:24.746 [INFO][4528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:24.749790 containerd[1589]: time="2025-01-29T12:02:24.749212037Z" level=info msg="TearDown network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\" successfully" Jan 29 12:02:24.749790 containerd[1589]: time="2025-01-29T12:02:24.749261597Z" level=info msg="StopPodSandbox for \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\" returns successfully" Jan 29 12:02:24.751646 containerd[1589]: time="2025-01-29T12:02:24.751601525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7httm,Uid:411fde72-7e67-4c52-beb5-274e475ac3a2,Namespace:kube-system,Attempt:1,}" Jan 29 12:02:24.753574 systemd[1]: run-netns-cni\x2de312a3ce\x2dd71e\x2d335c\x2d13fe\x2d7a8dd1f6f52b.mount: Deactivated successfully. Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.712 [INFO][4545] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.715 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" iface="eth0" netns="/var/run/netns/cni-60285bd4-5429-7ca9-d7e0-5882227ecfcf" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.716 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" iface="eth0" netns="/var/run/netns/cni-60285bd4-5429-7ca9-d7e0-5882227ecfcf" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.716 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" iface="eth0" netns="/var/run/netns/cni-60285bd4-5429-7ca9-d7e0-5882227ecfcf" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.716 [INFO][4545] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.716 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.784 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.787 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.789 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.808 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.808 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.811 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:24.818791 containerd[1589]: 2025-01-29 12:02:24.815 [INFO][4545] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:24.822024 containerd[1589]: time="2025-01-29T12:02:24.821720586Z" level=info msg="TearDown network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\" successfully" Jan 29 12:02:24.822024 containerd[1589]: time="2025-01-29T12:02:24.821799946Z" level=info msg="StopPodSandbox for \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\" returns successfully" Jan 29 12:02:24.825292 containerd[1589]: time="2025-01-29T12:02:24.825242277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrwhb,Uid:d5e90633-37b9-444b-9d34-7ea4734213c7,Namespace:kube-system,Attempt:1,}" Jan 29 12:02:24.826014 systemd[1]: run-netns-cni\x2d60285bd4\x2d5429\x2d7ca9\x2dd7e0\x2d5882227ecfcf.mount: Deactivated successfully. 
Jan 29 12:02:24.907592 systemd-networkd[1239]: cali177ca7502d0: Gained IPv6LL Jan 29 12:02:25.083386 systemd-networkd[1239]: calia70da8503bc: Link UP Jan 29 12:02:25.086646 systemd-networkd[1239]: calia70da8503bc: Gained carrier Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:24.879 [INFO][4569] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0 coredns-7db6d8ff4d- kube-system 411fde72-7e67-4c52-beb5-274e475ac3a2 802 0 2025-01-29 12:01:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-b-488529c6ca coredns-7db6d8ff4d-7httm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia70da8503bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:24.879 [INFO][4569] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:24.978 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" HandleID="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.002 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" HandleID="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ea090), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-b-488529c6ca", "pod":"coredns-7db6d8ff4d-7httm", "timestamp":"2025-01-29 12:02:24.978060038 +0000 UTC"}, Hostname:"ci-4081-3-0-b-488529c6ca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.002 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.002 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.002 [INFO][4593] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-b-488529c6ca' Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.006 [INFO][4593] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.016 [INFO][4593] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.023 [INFO][4593] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.028 [INFO][4593] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.034 [INFO][4593] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.034 [INFO][4593] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.041 [INFO][4593] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50 Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.052 [INFO][4593] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.064 [INFO][4593] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.67/26] block=192.168.18.64/26 handle="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.064 [INFO][4593] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.67/26] handle="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.064 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:02:25.119565 containerd[1589]: 2025-01-29 12:02:25.064 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.67/26] IPv6=[] ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" HandleID="k8s-pod-network.02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:25.121524 containerd[1589]: 2025-01-29 12:02:25.071 [INFO][4569] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"411fde72-7e67-4c52-beb5-274e475ac3a2", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"", Pod:"coredns-7db6d8ff4d-7httm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia70da8503bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:25.121524 containerd[1589]: 2025-01-29 12:02:25.071 [INFO][4569] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.67/32] ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:25.121524 containerd[1589]: 2025-01-29 12:02:25.071 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia70da8503bc ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:25.121524 containerd[1589]: 2025-01-29 12:02:25.088 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" 
WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:25.121524 containerd[1589]: 2025-01-29 12:02:25.090 [INFO][4569] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"411fde72-7e67-4c52-beb5-274e475ac3a2", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50", Pod:"coredns-7db6d8ff4d-7httm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia70da8503bc", MAC:"d6:50:4f:55:13:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:25.121524 containerd[1589]: 2025-01-29 12:02:25.107 [INFO][4569] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7httm" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:25.162735 systemd-networkd[1239]: cali8abd84b6799: Link UP Jan 29 12:02:25.164287 systemd-networkd[1239]: cali8abd84b6799: Gained carrier Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:24.949 [INFO][4583] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0 coredns-7db6d8ff4d- kube-system d5e90633-37b9-444b-9d34-7ea4734213c7 803 0 2025-01-29 12:01:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-b-488529c6ca coredns-7db6d8ff4d-hrwhb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8abd84b6799 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:24.949 [INFO][4583] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.029 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" HandleID="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.052 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" HandleID="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-b-488529c6ca", "pod":"coredns-7db6d8ff4d-hrwhb", "timestamp":"2025-01-29 12:02:25.029975321 +0000 UTC"}, Hostname:"ci-4081-3-0-b-488529c6ca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.052 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.064 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.064 [INFO][4601] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-b-488529c6ca' Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.068 [INFO][4601] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.085 [INFO][4601] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.107 [INFO][4601] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.116 [INFO][4601] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.125 [INFO][4601] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.125 [INFO][4601] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.130 [INFO][4601] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.139 [INFO][4601] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.152 [INFO][4601] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.68/26] block=192.168.18.64/26 handle="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.153 [INFO][4601] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.68/26] handle="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.153 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:02:25.193323 containerd[1589]: 2025-01-29 12:02:25.153 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.68/26] IPv6=[] ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" HandleID="k8s-pod-network.f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:25.194798 containerd[1589]: 2025-01-29 12:02:25.156 [INFO][4583] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5e90633-37b9-444b-9d34-7ea4734213c7", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"", Pod:"coredns-7db6d8ff4d-hrwhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abd84b6799", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:25.194798 containerd[1589]: 2025-01-29 12:02:25.157 [INFO][4583] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.68/32] ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:25.194798 containerd[1589]: 2025-01-29 12:02:25.157 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8abd84b6799 ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:25.194798 containerd[1589]: 2025-01-29 12:02:25.165 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" 
WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:25.194798 containerd[1589]: 2025-01-29 12:02:25.165 [INFO][4583] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5e90633-37b9-444b-9d34-7ea4734213c7", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e", Pod:"coredns-7db6d8ff4d-hrwhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abd84b6799", MAC:"56:07:c3:06:1f:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:25.194798 containerd[1589]: 2025-01-29 12:02:25.188 [INFO][4583] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hrwhb" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:25.227304 containerd[1589]: time="2025-01-29T12:02:25.226608096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:25.227304 containerd[1589]: time="2025-01-29T12:02:25.226674336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:25.227304 containerd[1589]: time="2025-01-29T12:02:25.226698376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:25.227304 containerd[1589]: time="2025-01-29T12:02:25.226833296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:25.273318 containerd[1589]: time="2025-01-29T12:02:25.273198081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:25.273318 containerd[1589]: time="2025-01-29T12:02:25.273267321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:25.273806 containerd[1589]: time="2025-01-29T12:02:25.273283601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:25.273806 containerd[1589]: time="2025-01-29T12:02:25.273385002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:25.323031 containerd[1589]: time="2025-01-29T12:02:25.322972237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7httm,Uid:411fde72-7e67-4c52-beb5-274e475ac3a2,Namespace:kube-system,Attempt:1,} returns sandbox id \"02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50\"" Jan 29 12:02:25.340962 containerd[1589]: time="2025-01-29T12:02:25.337671003Z" level=info msg="CreateContainer within sandbox \"02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:02:25.374556 containerd[1589]: time="2025-01-29T12:02:25.374492758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hrwhb,Uid:d5e90633-37b9-444b-9d34-7ea4734213c7,Namespace:kube-system,Attempt:1,} returns sandbox id \"f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e\"" Jan 29 12:02:25.382640 containerd[1589]: time="2025-01-29T12:02:25.382583183Z" level=info msg="CreateContainer within sandbox \"f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:02:25.396513 containerd[1589]: time="2025-01-29T12:02:25.396425106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:25.418104 containerd[1589]: time="2025-01-29T12:02:25.418038254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 12:02:25.422930 systemd-networkd[1239]: cali15e5b532e7b: Gained IPv6LL Jan 29 12:02:25.436629 containerd[1589]: time="2025-01-29T12:02:25.436579552Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:25.443130 containerd[1589]: time="2025-01-29T12:02:25.443054612Z" level=info msg="CreateContainer within sandbox \"02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1af623512661d97c22e580db991061504f4bcd840e271ac009d0a0b8ec4da8ed\"" Jan 29 12:02:25.448615 containerd[1589]: time="2025-01-29T12:02:25.448557469Z" level=info msg="StartContainer for \"1af623512661d97c22e580db991061504f4bcd840e271ac009d0a0b8ec4da8ed\"" Jan 29 12:02:25.458948 containerd[1589]: time="2025-01-29T12:02:25.458774301Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:25.468933 containerd[1589]: time="2025-01-29T12:02:25.468247331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.411413131s" Jan 29 12:02:25.468933 containerd[1589]: time="2025-01-29T12:02:25.468522212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 12:02:25.473904 containerd[1589]: time="2025-01-29T12:02:25.473860268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:02:25.486564 containerd[1589]: time="2025-01-29T12:02:25.486494708Z" level=info msg="CreateContainer within sandbox \"f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88d6720fb77040d6d444d67dbb7b47ad49e6e020138c550bbf59914b0467bcc8\"" Jan 29 12:02:25.491082 containerd[1589]: time="2025-01-29T12:02:25.489951719Z" level=info msg="StartContainer for \"88d6720fb77040d6d444d67dbb7b47ad49e6e020138c550bbf59914b0467bcc8\"" Jan 29 12:02:25.498667 containerd[1589]: time="2025-01-29T12:02:25.498613266Z" level=info msg="CreateContainer within sandbox \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:02:25.549794 containerd[1589]: time="2025-01-29T12:02:25.549567465Z" level=info msg="CreateContainer within sandbox \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\"" Jan 29 12:02:25.552558 containerd[1589]: time="2025-01-29T12:02:25.552460754Z" level=info msg="StartContainer for \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\"" Jan 29 12:02:25.571216 containerd[1589]: time="2025-01-29T12:02:25.563819430Z" level=info msg="StopPodSandbox for \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\"" Jan 29 12:02:25.573375 containerd[1589]: time="2025-01-29T12:02:25.566923839Z" level=info msg="StopPodSandbox for \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\"" Jan 29 12:02:25.575731 containerd[1589]: time="2025-01-29T12:02:25.574372942Z" level=info msg="StartContainer for \"1af623512661d97c22e580db991061504f4bcd840e271ac009d0a0b8ec4da8ed\" returns successfully" Jan 29 12:02:25.656997 containerd[1589]: time="2025-01-29T12:02:25.654491033Z" level=info msg="StartContainer for \"88d6720fb77040d6d444d67dbb7b47ad49e6e020138c550bbf59914b0467bcc8\" returns successfully" Jan 29 12:02:25.860847 containerd[1589]: time="2025-01-29T12:02:25.858430670Z" level=info msg="StartContainer for \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\" returns successfully" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.798 [INFO][4813] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.798 [INFO][4813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" iface="eth0" netns="/var/run/netns/cni-2ab7b0bd-85ed-beb9-b68f-89bc4cf3ee28" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.799 [INFO][4813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" iface="eth0" netns="/var/run/netns/cni-2ab7b0bd-85ed-beb9-b68f-89bc4cf3ee28" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.799 [INFO][4813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" iface="eth0" netns="/var/run/netns/cni-2ab7b0bd-85ed-beb9-b68f-89bc4cf3ee28" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.799 [INFO][4813] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.799 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.838 [INFO][4864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.838 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.838 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.855 [WARNING][4864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.855 [INFO][4864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.861 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:25.873656 containerd[1589]: 2025-01-29 12:02:25.864 [INFO][4813] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:25.883538 containerd[1589]: time="2025-01-29T12:02:25.882156024Z" level=info msg="TearDown network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\" successfully" Jan 29 12:02:25.883538 containerd[1589]: time="2025-01-29T12:02:25.882308025Z" level=info msg="StopPodSandbox for \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\" returns successfully" Jan 29 12:02:25.883538 containerd[1589]: time="2025-01-29T12:02:25.883266308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f4x79,Uid:f8acab89-8346-440a-bbf8-eaa1717109a0,Namespace:calico-system,Attempt:1,}" Jan 29 12:02:25.884583 systemd[1]: run-netns-cni\x2d2ab7b0bd\x2d85ed\x2dbeb9\x2db68f\x2d89bc4cf3ee28.mount: Deactivated successfully. Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.795 [INFO][4832] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.796 [INFO][4832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" iface="eth0" netns="/var/run/netns/cni-9f676e49-6668-8e99-17ca-3e205514ae54" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.797 [INFO][4832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" iface="eth0" netns="/var/run/netns/cni-9f676e49-6668-8e99-17ca-3e205514ae54" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.799 [INFO][4832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" iface="eth0" netns="/var/run/netns/cni-9f676e49-6668-8e99-17ca-3e205514ae54" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.799 [INFO][4832] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.799 [INFO][4832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.866 [INFO][4865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.866 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.866 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.892 [WARNING][4865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.893 [INFO][4865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.896 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:25.915671 containerd[1589]: 2025-01-29 12:02:25.907 [INFO][4832] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:25.926174 containerd[1589]: time="2025-01-29T12:02:25.918947059Z" level=info msg="TearDown network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\" successfully" Jan 29 12:02:25.926174 containerd[1589]: time="2025-01-29T12:02:25.918984100Z" level=info msg="StopPodSandbox for \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\" returns successfully" Jan 29 12:02:25.929258 systemd[1]: run-netns-cni\x2d9f676e49\x2d6668\x2d8e99\x2d17ca\x2d3e205514ae54.mount: Deactivated successfully. Jan 29 12:02:25.939260 containerd[1589]: time="2025-01-29T12:02:25.938649481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-8kcss,Uid:3c7ffc06-7657-4b76-aeac-67f169d0c448,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:02:25.958422 kubelet[2982]: I0129 12:02:25.954676 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77c4b4575d-lt42c" podStartSLOduration=22.538318985 podStartE2EDuration="24.954653491s" podCreationTimestamp="2025-01-29 12:02:01 +0000 UTC" firstStartedPulling="2025-01-29 12:02:23.056066758 +0000 UTC m=+44.619079506" lastFinishedPulling="2025-01-29 12:02:25.472401264 +0000 UTC m=+47.035414012" observedRunningTime="2025-01-29 12:02:25.946776186 +0000 UTC m=+47.509788934" watchObservedRunningTime="2025-01-29 12:02:25.954653491 +0000 UTC m=+47.517666239" Jan 29 12:02:26.010312 kubelet[2982]: I0129 12:02:26.006140 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7httm" podStartSLOduration=32.005942451 podStartE2EDuration="32.005942451s" podCreationTimestamp="2025-01-29 12:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:26.001364757 +0000 UTC m=+47.564377625" watchObservedRunningTime="2025-01-29 12:02:26.005942451 +0000 UTC m=+47.568955159" Jan 29 12:02:26.034291 kubelet[2982]: I0129 12:02:26.033056 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hrwhb" podStartSLOduration=32.033033895 podStartE2EDuration="32.033033895s" podCreationTimestamp="2025-01-29 12:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:26.030769008 +0000 UTC m=+47.593781756" watchObservedRunningTime="2025-01-29 12:02:26.033033895 +0000 UTC 
m=+47.596046643" Jan 29 12:02:26.250555 systemd-networkd[1239]: cali8abd84b6799: Gained IPv6LL Jan 29 12:02:26.251351 systemd-networkd[1239]: calia70da8503bc: Gained IPv6LL Jan 29 12:02:26.282997 systemd-networkd[1239]: cali127fa60017e: Link UP Jan 29 12:02:26.283646 systemd-networkd[1239]: cali127fa60017e: Gained carrier Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.140 [INFO][4900] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0 csi-node-driver- calico-system f8acab89-8346-440a-bbf8-eaa1717109a0 821 0 2025-01-29 12:02:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-b-488529c6ca csi-node-driver-f4x79 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali127fa60017e [] []}} ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.141 [INFO][4900] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.186 [INFO][4934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" HandleID="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.204 [INFO][4934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" HandleID="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003175c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-b-488529c6ca", "pod":"csi-node-driver-f4x79", "timestamp":"2025-01-29 12:02:26.186677772 +0000 UTC"}, Hostname:"ci-4081-3-0-b-488529c6ca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.204 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.204 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.204 [INFO][4934] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-b-488529c6ca' Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.209 [INFO][4934] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.217 [INFO][4934] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.229 [INFO][4934] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.233 [INFO][4934] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.239 [INFO][4934] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.239 [INFO][4934] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.244 [INFO][4934] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6 Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.261 [INFO][4934] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.273 [INFO][4934] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.69/26] block=192.168.18.64/26 handle="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.273 [INFO][4934] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.69/26] handle="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.273 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
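The [4934] ipam trace above is the whole address-assignment path for csi-node-driver-f4x79: acquire the host-wide IPAM lock, look up this node's block affinities, confirm the affine block 192.168.18.64/26, claim the next free address by writing the block back (192.168.18.69/26 here), then release the lock. A much-reduced sketch of that block walk; the Block type, the seed data and the handle string below are illustrative only, not Calico's actual ipam package:

// ipam_sketch.go: an illustrative reduction of the block-affinity
// assignment the [4934] trace follows.
package main

import (
	"errors"
	"fmt"
	"net"
)

// Block stands in for a Calico IPAM allocation block: a /26 chunk of the
// pool plus a record of which addresses are already handed out.
type Block struct {
	CIDR      net.IPNet
	Allocated map[string]string // address -> allocation handle
}

// nextFree walks the block and returns the first address with no handle.
func (b *Block) nextFree() (net.IP, error) {
	for ip := b.CIDR.IP.Mask(b.CIDR.Mask); b.CIDR.Contains(ip); ip = next(ip) {
		if _, used := b.Allocated[ip.String()]; !used {
			return ip, nil
		}
	}
	return nil, errors.New("block exhausted")
}

// next returns ip+1 with byte carry, enough for walking a /26.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.18.64/26")
	blk := &Block{CIDR: *cidr, Allocated: map[string]string{
		// Addresses this node's block already records as in use (illustrative);
		// the trace above then picks the next free one.
		"192.168.18.64": "h0", "192.168.18.65": "h1", "192.168.18.66": "h2",
		"192.168.18.67": "h3", "192.168.18.68": "h4",
	}}
	// The real plugin does this while holding the host-wide IPAM lock, then
	// writes the updated block back to the datastore before releasing it.
	ip, err := blk.nextFree()
	if err != nil {
		panic(err)
	}
	blk.Allocated[ip.String()] = "k8s-pod-network.<container id>"
	fmt.Println("assigned", ip) // assigned 192.168.18.69, as in the trace
}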
Jan 29 12:02:26.307642 containerd[1589]: 2025-01-29 12:02:26.273 [INFO][4934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.69/26] IPv6=[] ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" HandleID="k8s-pod-network.3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:26.308693 containerd[1589]: 2025-01-29 12:02:26.276 [INFO][4900] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f8acab89-8346-440a-bbf8-eaa1717109a0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"", Pod:"csi-node-driver-f4x79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali127fa60017e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:26.308693 containerd[1589]: 2025-01-29 12:02:26.276 [INFO][4900] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.69/32] ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:26.308693 containerd[1589]: 2025-01-29 12:02:26.276 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali127fa60017e ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:26.308693 containerd[1589]: 2025-01-29 12:02:26.285 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:26.308693 containerd[1589]: 2025-01-29 12:02:26.287 [INFO][4900] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f8acab89-8346-440a-bbf8-eaa1717109a0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6", Pod:"csi-node-driver-f4x79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali127fa60017e", MAC:"3e:5c:2b:68:aa:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:26.308693 containerd[1589]: 2025-01-29 12:02:26.298 [INFO][4900] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6" Namespace="calico-system" Pod="csi-node-driver-f4x79" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:26.356234 containerd[1589]: time="2025-01-29T12:02:26.355620136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:26.356234 containerd[1589]: time="2025-01-29T12:02:26.356206537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:26.356666 containerd[1589]: time="2025-01-29T12:02:26.356355538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:26.358387 containerd[1589]: time="2025-01-29T12:02:26.358278824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:26.378876 systemd-networkd[1239]: calif86f281a59c: Link UP Jan 29 12:02:26.379110 systemd-networkd[1239]: calif86f281a59c: Gained carrier Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.196 [INFO][4916] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0 calico-apiserver-7cf4d54ff6- calico-apiserver 3c7ffc06-7657-4b76-aeac-67f169d0c448 822 0 2025-01-29 12:02:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf4d54ff6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-b-488529c6ca calico-apiserver-7cf4d54ff6-8kcss eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif86f281a59c [] []}} ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.196 [INFO][4916] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.268 [INFO][4942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" HandleID="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.295 [INFO][4942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" HandleID="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-b-488529c6ca", "pod":"calico-apiserver-7cf4d54ff6-8kcss", "timestamp":"2025-01-29 12:02:26.268383065 +0000 UTC"}, Hostname:"ci-4081-3-0-b-488529c6ca", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.296 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.296 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.296 [INFO][4942] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-b-488529c6ca' Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.305 [INFO][4942] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.313 [INFO][4942] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.322 [INFO][4942] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.325 [INFO][4942] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.332 [INFO][4942] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.333 [INFO][4942] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.336 [INFO][4942] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2 Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.348 [INFO][4942] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.359 [INFO][4942] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.70/26] block=192.168.18.64/26 handle="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.362 [INFO][4942] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.70/26] handle="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.362 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
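The Workload and WorkloadEndpoint names running through these traces, such as ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0, are the node name, the orchestrator, the pod name and the interface joined with single dashes; the doubled dashes appear to escape literal dashes inside the node and pod names so the composite stays unambiguous. A small sketch of that encoding, inferred from the names in this log rather than taken from Calico's source:

// wep_name.go: reconstruct the endpoint names seen in the traces above.
package main

import (
	"fmt"
	"strings"
)

// escape doubles literal dashes in a component so they cannot be read as
// the field separator of the composite name.
func escape(s string) string { return strings.ReplaceAll(s, "-", "--") }

// wepName builds <node>-k8s-<pod>-<iface>.
func wepName(node, pod, iface string) string {
	return escape(node) + "-k8s-" + escape(pod) + "-" + iface
}

func main() {
	fmt.Println(wepName("ci-4081-3-0-b-488529c6ca", "csi-node-driver-f4x79", "eth0"))
	fmt.Println(wepName("ci-4081-3-0-b-488529c6ca", "calico-apiserver-7cf4d54ff6-8kcss", "eth0"))
	// ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0
	// ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0
}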
Jan 29 12:02:26.413097 containerd[1589]: 2025-01-29 12:02:26.362 [INFO][4942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.70/26] IPv6=[] ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" HandleID="k8s-pod-network.0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:26.413705 containerd[1589]: 2025-01-29 12:02:26.369 [INFO][4916] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c7ffc06-7657-4b76-aeac-67f169d0c448", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"", Pod:"calico-apiserver-7cf4d54ff6-8kcss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif86f281a59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:26.413705 containerd[1589]: 2025-01-29 12:02:26.370 [INFO][4916] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.70/32] ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:26.413705 containerd[1589]: 2025-01-29 12:02:26.370 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif86f281a59c ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:26.413705 containerd[1589]: 2025-01-29 12:02:26.376 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:26.413705 containerd[1589]: 2025-01-29 12:02:26.382 [INFO][4916] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c7ffc06-7657-4b76-aeac-67f169d0c448", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2", Pod:"calico-apiserver-7cf4d54ff6-8kcss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif86f281a59c", MAC:"ae:3c:5a:d5:ac:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:26.413705 containerd[1589]: 2025-01-29 12:02:26.408 [INFO][4916] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2" Namespace="calico-apiserver" Pod="calico-apiserver-7cf4d54ff6-8kcss" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:26.435601 containerd[1589]: time="2025-01-29T12:02:26.435559464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f4x79,Uid:f8acab89-8346-440a-bbf8-eaa1717109a0,Namespace:calico-system,Attempt:1,} returns sandbox id \"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6\"" Jan 29 12:02:26.457602 containerd[1589]: time="2025-01-29T12:02:26.457424251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:26.457826 containerd[1589]: time="2025-01-29T12:02:26.457611452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:26.457826 containerd[1589]: time="2025-01-29T12:02:26.457659452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:26.457826 containerd[1589]: time="2025-01-29T12:02:26.457775452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:26.516752 containerd[1589]: time="2025-01-29T12:02:26.516546315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf4d54ff6-8kcss,Uid:3c7ffc06-7657-4b76-aeac-67f169d0c448,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2\"" Jan 29 12:02:27.616598 containerd[1589]: time="2025-01-29T12:02:27.616532791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:27.617997 containerd[1589]: time="2025-01-29T12:02:27.617948636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 12:02:27.619077 containerd[1589]: time="2025-01-29T12:02:27.619021359Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:27.622855 containerd[1589]: time="2025-01-29T12:02:27.622781650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:27.623926 containerd[1589]: time="2025-01-29T12:02:27.623857694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.149576104s" Jan 29 12:02:27.623926 containerd[1589]: time="2025-01-29T12:02:27.623906414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:02:27.626271 containerd[1589]: time="2025-01-29T12:02:27.626224341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:02:27.627927 containerd[1589]: time="2025-01-29T12:02:27.627847386Z" level=info msg="CreateContainer within sandbox \"bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:02:27.654127 containerd[1589]: time="2025-01-29T12:02:27.654076307Z" level=info msg="CreateContainer within sandbox \"bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f148166d7b1233132ead8651926340529961083f6cfb59dd83bc5f79f82777d3\"" Jan 29 12:02:27.657271 containerd[1589]: time="2025-01-29T12:02:27.655279430Z" level=info msg="StartContainer for \"f148166d7b1233132ead8651926340529961083f6cfb59dd83bc5f79f82777d3\"" Jan 29 12:02:27.742263 containerd[1589]: time="2025-01-29T12:02:27.741918897Z" level=info msg="StartContainer for \"f148166d7b1233132ead8651926340529961083f6cfb59dd83bc5f79f82777d3\" returns successfully" Jan 29 12:02:28.035865 kubelet[2982]: I0129 12:02:28.031898 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-gcwqx" podStartSLOduration=22.376654192 podStartE2EDuration="26.031872829s" podCreationTimestamp="2025-01-29 12:02:02 +0000 UTC" firstStartedPulling="2025-01-29 
12:02:23.970092901 +0000 UTC m=+45.533105649" lastFinishedPulling="2025-01-29 12:02:27.625311538 +0000 UTC m=+49.188324286" observedRunningTime="2025-01-29 12:02:28.026958614 +0000 UTC m=+49.589971362" watchObservedRunningTime="2025-01-29 12:02:28.031872829 +0000 UTC m=+49.594885537" Jan 29 12:02:28.171874 systemd-networkd[1239]: calif86f281a59c: Gained IPv6LL Jan 29 12:02:28.174440 systemd-networkd[1239]: cali127fa60017e: Gained IPv6LL Jan 29 12:02:29.012547 kubelet[2982]: I0129 12:02:29.012502 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:02:29.074984 containerd[1589]: time="2025-01-29T12:02:29.074440331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:29.075785 containerd[1589]: time="2025-01-29T12:02:29.075731095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 12:02:29.076529 containerd[1589]: time="2025-01-29T12:02:29.076404657Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:29.080245 containerd[1589]: time="2025-01-29T12:02:29.080114549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:29.081077 containerd[1589]: time="2025-01-29T12:02:29.080832151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.45456609s" Jan 29 12:02:29.081077 containerd[1589]: time="2025-01-29T12:02:29.080880431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 12:02:29.084105 containerd[1589]: time="2025-01-29T12:02:29.082613676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:02:29.086497 containerd[1589]: time="2025-01-29T12:02:29.086418368Z" level=info msg="CreateContainer within sandbox \"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:02:29.107800 containerd[1589]: time="2025-01-29T12:02:29.107743672Z" level=info msg="CreateContainer within sandbox \"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5407efcc391410204d8ee5285194ad9cf1308a8fb1c6c0f89a9d0923bc38c8d8\"" Jan 29 12:02:29.110325 containerd[1589]: time="2025-01-29T12:02:29.108817916Z" level=info msg="StartContainer for \"5407efcc391410204d8ee5285194ad9cf1308a8fb1c6c0f89a9d0923bc38c8d8\"" Jan 29 12:02:29.191913 containerd[1589]: time="2025-01-29T12:02:29.191766087Z" level=info msg="StartContainer for \"5407efcc391410204d8ee5285194ad9cf1308a8fb1c6c0f89a9d0923bc38c8d8\" returns successfully" Jan 29 12:02:29.473214 containerd[1589]: time="2025-01-29T12:02:29.471995657Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 29 12:02:29.474027 containerd[1589]: time="2025-01-29T12:02:29.473958103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:02:29.477098 containerd[1589]: time="2025-01-29T12:02:29.476911832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 394.226796ms" Jan 29 12:02:29.477098 containerd[1589]: time="2025-01-29T12:02:29.476974792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:02:29.478937 containerd[1589]: time="2025-01-29T12:02:29.478587357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:02:29.483370 containerd[1589]: time="2025-01-29T12:02:29.482936650Z" level=info msg="CreateContainer within sandbox \"0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:02:29.522197 containerd[1589]: time="2025-01-29T12:02:29.520978165Z" level=info msg="CreateContainer within sandbox \"0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37ef8180429cbc8d764b1ac8270c2fcbf48d50bc80e665e17b9214c2cbf78d3f\"" Jan 29 12:02:29.530508 containerd[1589]: time="2025-01-29T12:02:29.530299834Z" level=info msg="StartContainer for \"37ef8180429cbc8d764b1ac8270c2fcbf48d50bc80e665e17b9214c2cbf78d3f\"" Jan 29 12:02:29.610506 containerd[1589]: time="2025-01-29T12:02:29.610406956Z" level=info msg="StartContainer for \"37ef8180429cbc8d764b1ac8270c2fcbf48d50bc80e665e17b9214c2cbf78d3f\" returns successfully" Jan 29 12:02:30.040120 kubelet[2982]: I0129 12:02:30.040041 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf4d54ff6-8kcss" podStartSLOduration=25.080173782 podStartE2EDuration="28.040022098s" podCreationTimestamp="2025-01-29 12:02:02 +0000 UTC" firstStartedPulling="2025-01-29 12:02:26.51835072 +0000 UTC m=+48.081363468" lastFinishedPulling="2025-01-29 12:02:29.478198996 +0000 UTC m=+51.041211784" observedRunningTime="2025-01-29 12:02:30.038778454 +0000 UTC m=+51.601791202" watchObservedRunningTime="2025-01-29 12:02:30.040022098 +0000 UTC m=+51.603034846" Jan 29 12:02:31.027918 kubelet[2982]: I0129 12:02:31.027336 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:02:31.117125 containerd[1589]: time="2025-01-29T12:02:31.117011458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:31.120293 containerd[1589]: time="2025-01-29T12:02:31.119715706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 12:02:31.121616 containerd[1589]: time="2025-01-29T12:02:31.121446991Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
12:02:31.132326 containerd[1589]: time="2025-01-29T12:02:31.130626819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:31.132326 containerd[1589]: time="2025-01-29T12:02:31.131776022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.653135145s" Jan 29 12:02:31.132326 containerd[1589]: time="2025-01-29T12:02:31.131825942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 12:02:31.140731 containerd[1589]: time="2025-01-29T12:02:31.138994164Z" level=info msg="CreateContainer within sandbox \"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:02:31.167667 containerd[1589]: time="2025-01-29T12:02:31.167606889Z" level=info msg="CreateContainer within sandbox \"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0b67257db00829de43ac12caa9e31fa03cc9f4639f036a4bb2068f25d26ad541\"" Jan 29 12:02:31.170198 containerd[1589]: time="2025-01-29T12:02:31.169550095Z" level=info msg="StartContainer for \"0b67257db00829de43ac12caa9e31fa03cc9f4639f036a4bb2068f25d26ad541\"" Jan 29 12:02:31.297822 containerd[1589]: time="2025-01-29T12:02:31.297513918Z" level=info msg="StartContainer for \"0b67257db00829de43ac12caa9e31fa03cc9f4639f036a4bb2068f25d26ad541\" returns successfully" Jan 29 12:02:31.708672 kubelet[2982]: I0129 12:02:31.708316 2982 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:02:31.708672 kubelet[2982]: I0129 12:02:31.708377 2982 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:02:32.058210 kubelet[2982]: I0129 12:02:32.056320 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f4x79" podStartSLOduration=26.36133823 podStartE2EDuration="31.056292865s" podCreationTimestamp="2025-01-29 12:02:01 +0000 UTC" firstStartedPulling="2025-01-29 12:02:26.439262435 +0000 UTC m=+48.002275143" lastFinishedPulling="2025-01-29 12:02:31.13421703 +0000 UTC m=+52.697229778" observedRunningTime="2025-01-29 12:02:32.052628094 +0000 UTC m=+53.615640842" watchObservedRunningTime="2025-01-29 12:02:32.056292865 +0000 UTC m=+53.619305613" Jan 29 12:02:35.487994 kubelet[2982]: I0129 12:02:35.487640 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:02:37.645806 containerd[1589]: time="2025-01-29T12:02:37.644685810Z" level=info msg="StopContainer for \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\" with timeout 300 (s)" Jan 29 12:02:37.647329 containerd[1589]: time="2025-01-29T12:02:37.646160015Z" level=info 
msg="Stop container \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\" with signal terminated" Jan 29 12:02:37.865427 containerd[1589]: time="2025-01-29T12:02:37.864477683Z" level=info msg="StopContainer for \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\" with timeout 30 (s)" Jan 29 12:02:37.866587 containerd[1589]: time="2025-01-29T12:02:37.866541409Z" level=info msg="Stop container \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\" with signal terminated" Jan 29 12:02:37.941533 containerd[1589]: time="2025-01-29T12:02:37.939741459Z" level=info msg="shim disconnected" id=d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf namespace=k8s.io Jan 29 12:02:37.941533 containerd[1589]: time="2025-01-29T12:02:37.940689862Z" level=warning msg="cleaning up after shim disconnected" id=d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf namespace=k8s.io Jan 29 12:02:37.941533 containerd[1589]: time="2025-01-29T12:02:37.940705262Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:37.943580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf-rootfs.mount: Deactivated successfully. Jan 29 12:02:38.012083 containerd[1589]: time="2025-01-29T12:02:38.012038467Z" level=info msg="StopContainer for \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\" with timeout 5 (s)" Jan 29 12:02:38.012464 containerd[1589]: time="2025-01-29T12:02:38.012435228Z" level=info msg="Stop container \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\" with signal terminated" Jan 29 12:02:38.110421 containerd[1589]: time="2025-01-29T12:02:38.110235188Z" level=info msg="shim disconnected" id=d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35 namespace=k8s.io Jan 29 12:02:38.110421 containerd[1589]: time="2025-01-29T12:02:38.110304428Z" level=warning msg="cleaning up after shim disconnected" id=d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35 namespace=k8s.io Jan 29 12:02:38.110421 containerd[1589]: time="2025-01-29T12:02:38.110313668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:38.119899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35-rootfs.mount: Deactivated successfully. 
Jan 29 12:02:38.131147 containerd[1589]: time="2025-01-29T12:02:38.131080008Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:02:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:02:38.229997 containerd[1589]: time="2025-01-29T12:02:38.229615770Z" level=info msg="StopContainer for \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\" returns successfully" Jan 29 12:02:38.232441 containerd[1589]: time="2025-01-29T12:02:38.232310137Z" level=info msg="StopContainer for \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\" returns successfully" Jan 29 12:02:38.233068 containerd[1589]: time="2025-01-29T12:02:38.232760859Z" level=info msg="StopPodSandbox for \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\"" Jan 29 12:02:38.233068 containerd[1589]: time="2025-01-29T12:02:38.232892419Z" level=info msg="Container to stop \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:38.233068 containerd[1589]: time="2025-01-29T12:02:38.232906619Z" level=info msg="Container to stop \"eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:38.233068 containerd[1589]: time="2025-01-29T12:02:38.232918299Z" level=info msg="Container to stop \"5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:38.233689 containerd[1589]: time="2025-01-29T12:02:38.233623421Z" level=info msg="StopPodSandbox for \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\"" Jan 29 12:02:38.234568 containerd[1589]: time="2025-01-29T12:02:38.234055542Z" level=info msg="Container to stop \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:38.243694 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871-shm.mount: Deactivated successfully. Jan 29 12:02:38.243886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b-shm.mount: Deactivated successfully. 
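The mount units systemd reports deactivating in this stretch of the log (the run-netns-cni\x2d... unit earlier and the ...-shm.mount and ...-rootfs.mount units here) are just the backing paths encoded as unit names: "/" separators become "-", and literal dashes inside a path component are escaped as \x2d so they cannot be mistaken for separators. A reduced sketch of that encoding; real systemd escaping (systemd-escape) covers more characters than the dash case shown here:

// unit_name.go: reproduce the mount unit names systemd logs while
// cleaning up these sandboxes.
package main

import (
	"fmt"
	"strings"
)

// mountUnit turns a mount point into the unit name systemd derives for it.
func mountUnit(path string) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", `\x2d`) // escape literal dashes
	}
	return strings.Join(parts, "-") + ".mount" // "/" separators become "-"
}

func main() {
	fmt.Println(mountUnit("/run/netns/cni-9f676e49-6668-8e99-17ca-3e205514ae54"))
	// run-netns-cni\x2d9f676e49\x2d6668\x2d8e99\x2d17ca\x2d3e205514ae54.mount
}

Dots are permitted in unit names, which is why the io.containerd.* components of the shm and rootfs units above pass through unchanged.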
Jan 29 12:02:38.320190 containerd[1589]: time="2025-01-29T12:02:38.318673984Z" level=info msg="shim disconnected" id=77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871 namespace=k8s.io Jan 29 12:02:38.320190 containerd[1589]: time="2025-01-29T12:02:38.318773345Z" level=warning msg="cleaning up after shim disconnected" id=77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871 namespace=k8s.io Jan 29 12:02:38.320190 containerd[1589]: time="2025-01-29T12:02:38.318783345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:38.347206 containerd[1589]: time="2025-01-29T12:02:38.344856619Z" level=info msg="shim disconnected" id=1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b namespace=k8s.io Jan 29 12:02:38.347206 containerd[1589]: time="2025-01-29T12:02:38.344937779Z" level=warning msg="cleaning up after shim disconnected" id=1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b namespace=k8s.io Jan 29 12:02:38.347206 containerd[1589]: time="2025-01-29T12:02:38.344948459Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:38.374498 containerd[1589]: time="2025-01-29T12:02:38.371480975Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:02:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:02:38.378535 containerd[1589]: time="2025-01-29T12:02:38.378472155Z" level=info msg="TearDown network for sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" successfully" Jan 29 12:02:38.378535 containerd[1589]: time="2025-01-29T12:02:38.378519116Z" level=info msg="StopPodSandbox for \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" returns successfully" Jan 29 12:02:38.483502 kubelet[2982]: I0129 12:02:38.483331 2982 topology_manager.go:215] "Topology Admit Handler" podUID="a171cb43-a3c7-4d04-af40-031547ae9f5b" podNamespace="calico-system" podName="calico-node-jc4dj" Jan 29 12:02:38.483962 kubelet[2982]: E0129 12:02:38.483508 2982 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56e6a48b-7702-4cea-a191-6f3a685269b6" containerName="calico-node" Jan 29 12:02:38.483962 kubelet[2982]: E0129 12:02:38.483522 2982 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56e6a48b-7702-4cea-a191-6f3a685269b6" containerName="install-cni" Jan 29 12:02:38.483962 kubelet[2982]: E0129 12:02:38.483530 2982 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56e6a48b-7702-4cea-a191-6f3a685269b6" containerName="flexvol-driver" Jan 29 12:02:38.483962 kubelet[2982]: I0129 12:02:38.483561 2982 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e6a48b-7702-4cea-a191-6f3a685269b6" containerName="calico-node" Jan 29 12:02:38.508624 systemd-networkd[1239]: cali177ca7502d0: Link DOWN Jan 29 12:02:38.509235 systemd-networkd[1239]: cali177ca7502d0: Lost carrier Jan 29 12:02:38.579814 kubelet[2982]: I0129 12:02:38.579760 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-bin-dir\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.579814 kubelet[2982]: I0129 12:02:38.579813 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-policysync\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.580002 kubelet[2982]: I0129 12:02:38.579938 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-net-dir\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.580002 kubelet[2982]: I0129 12:02:38.579957 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-xtables-lock\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.580002 kubelet[2982]: I0129 12:02:38.579975 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-lib-modules\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.580068 kubelet[2982]: I0129 12:02:38.580006 2982 scope.go:117] "RemoveContainer" containerID="5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97" Jan 29 12:02:38.580091 kubelet[2982]: I0129 12:02:38.580083 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-run-calico\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.580115 kubelet[2982]: I0129 12:02:38.580104 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-flexvol-driver-host\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.580142 kubelet[2982]: I0129 12:02:38.580131 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56e6a48b-7702-4cea-a191-6f3a685269b6-tigera-ca-bundle\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.581176 kubelet[2982]: I0129 12:02:38.580239 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdvnh\" (UniqueName: \"kubernetes.io/projected/56e6a48b-7702-4cea-a191-6f3a685269b6-kube-api-access-bdvnh\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.581176 kubelet[2982]: I0129 12:02:38.580266 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-log-dir\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.581176 kubelet[2982]: I0129 12:02:38.580414 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56e6a48b-7702-4cea-a191-6f3a685269b6-node-certs\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.581176 kubelet[2982]: I0129 
12:02:38.580435 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-lib-calico\") pod \"56e6a48b-7702-4cea-a191-6f3a685269b6\" (UID: \"56e6a48b-7702-4cea-a191-6f3a685269b6\") " Jan 29 12:02:38.581176 kubelet[2982]: I0129 12:02:38.580475 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.581176 kubelet[2982]: I0129 12:02:38.580521 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.581333 kubelet[2982]: I0129 12:02:38.580576 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-lib-modules\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581333 kubelet[2982]: I0129 12:02:38.580596 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-xtables-lock\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581333 kubelet[2982]: I0129 12:02:38.580615 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-var-lib-calico\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581333 kubelet[2982]: I0129 12:02:38.580731 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-cni-net-dir\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581333 kubelet[2982]: I0129 12:02:38.580752 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-cni-bin-dir\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581509 kubelet[2982]: I0129 12:02:38.580771 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhmqt\" (UniqueName: \"kubernetes.io/projected/a171cb43-a3c7-4d04-af40-031547ae9f5b-kube-api-access-hhmqt\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581509 kubelet[2982]: I0129 12:02:38.580895 2982 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a171cb43-a3c7-4d04-af40-031547ae9f5b-node-certs\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581509 kubelet[2982]: I0129 12:02:38.580919 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-var-run-calico\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581509 kubelet[2982]: I0129 12:02:38.580934 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-flexvol-driver-host\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581509 kubelet[2982]: I0129 12:02:38.581049 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a171cb43-a3c7-4d04-af40-031547ae9f5b-tigera-ca-bundle\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581626 kubelet[2982]: I0129 12:02:38.581067 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-cni-log-dir\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581626 kubelet[2982]: I0129 12:02:38.581087 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a171cb43-a3c7-4d04-af40-031547ae9f5b-policysync\") pod \"calico-node-jc4dj\" (UID: \"a171cb43-a3c7-4d04-af40-031547ae9f5b\") " pod="calico-system/calico-node-jc4dj" Jan 29 12:02:38.581626 kubelet[2982]: I0129 12:02:38.581214 2982 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-bin-dir\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.581626 kubelet[2982]: I0129 12:02:38.581230 2982 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-lib-modules\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.596694 kubelet[2982]: I0129 12:02:38.580540 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-policysync" (OuterVolumeSpecName: "policysync") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.596694 kubelet[2982]: I0129 12:02:38.580655 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.596694 kubelet[2982]: I0129 12:02:38.580670 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.596694 kubelet[2982]: I0129 12:02:38.580687 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.596694 kubelet[2982]: I0129 12:02:38.580797 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.597853 kubelet[2982]: I0129 12:02:38.580819 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.597853 kubelet[2982]: I0129 12:02:38.592051 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e6a48b-7702-4cea-a191-6f3a685269b6-node-certs" (OuterVolumeSpecName: "node-certs") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:02:38.597853 kubelet[2982]: I0129 12:02:38.592104 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:02:38.597853 kubelet[2982]: I0129 12:02:38.597688 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e6a48b-7702-4cea-a191-6f3a685269b6-kube-api-access-bdvnh" (OuterVolumeSpecName: "kube-api-access-bdvnh") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "kube-api-access-bdvnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:02:38.600129 containerd[1589]: time="2025-01-29T12:02:38.599599508Z" level=info msg="RemoveContainer for \"5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97\"" Jan 29 12:02:38.600656 kubelet[2982]: I0129 12:02:38.600621 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56e6a48b-7702-4cea-a191-6f3a685269b6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "56e6a48b-7702-4cea-a191-6f3a685269b6" (UID: "56e6a48b-7702-4cea-a191-6f3a685269b6"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:02:38.607355 containerd[1589]: time="2025-01-29T12:02:38.607293330Z" level=info msg="RemoveContainer for \"5dffc4e863ed77cca66a17ad8b74203a083a3c23fddd03e204f1e280d6134d97\" returns successfully" Jan 29 12:02:38.620264 kubelet[2982]: I0129 12:02:38.620181 2982 scope.go:117] "RemoveContainer" containerID="d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35" Jan 29 12:02:38.627769 containerd[1589]: time="2025-01-29T12:02:38.627704068Z" level=info msg="RemoveContainer for \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\"" Jan 29 12:02:38.651992 containerd[1589]: time="2025-01-29T12:02:38.651894618Z" level=info msg="RemoveContainer for \"d96c59cd44777f186b5a370fa836336906cf531553df3a55deac17173bf80d35\" returns successfully" Jan 29 12:02:38.655557 kubelet[2982]: I0129 12:02:38.653554 2982 scope.go:117] "RemoveContainer" containerID="eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca" Jan 29 12:02:38.658570 containerd[1589]: time="2025-01-29T12:02:38.658499836Z" level=info msg="RemoveContainer for \"eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca\"" Jan 29 12:02:38.673055 containerd[1589]: time="2025-01-29T12:02:38.672917678Z" level=info msg="RemoveContainer for \"eccb4d21223d05d210bcd531b733980ab75422fec400ba1bf52afa789cea6bca\" returns successfully" Jan 29 12:02:38.678266 containerd[1589]: time="2025-01-29T12:02:38.678086892Z" level=info msg="StopPodSandbox for \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.681968 2982 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-log-dir\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.682003 2982 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-lib-calico\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.682014 2982 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56e6a48b-7702-4cea-a191-6f3a685269b6-node-certs\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.682023 2982 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-xtables-lock\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.682032 2982 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-policysync\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.682039 2982 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-cni-net-dir\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.682048 2982 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-var-run-calico\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682261 kubelet[2982]: I0129 12:02:38.682056 2982 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56e6a48b-7702-4cea-a191-6f3a685269b6-flexvol-driver-host\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682607 kubelet[2982]: I0129 12:02:38.682068 2982 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56e6a48b-7702-4cea-a191-6f3a685269b6-tigera-ca-bundle\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.682607 kubelet[2982]: I0129 12:02:38.682077 2982 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bdvnh\" (UniqueName: \"kubernetes.io/projected/56e6a48b-7702-4cea-a191-6f3a685269b6-kube-api-access-bdvnh\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.499 [INFO][5450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.503 [INFO][5450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" iface="eth0" netns="/var/run/netns/cni-4a80bdac-8a07-5a3d-37ea-35b334189878" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.506 [INFO][5450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" iface="eth0" netns="/var/run/netns/cni-4a80bdac-8a07-5a3d-37ea-35b334189878" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.521 [INFO][5450] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" after=16.991729ms iface="eth0" netns="/var/run/netns/cni-4a80bdac-8a07-5a3d-37ea-35b334189878" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.522 [INFO][5450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.522 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.572 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.572 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.573 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.680 [INFO][5458] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.680 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.691 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.700286 containerd[1589]: 2025-01-29 12:02:38.696 [INFO][5450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:02:38.701227 containerd[1589]: time="2025-01-29T12:02:38.701136918Z" level=info msg="TearDown network for sandbox \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" successfully" Jan 29 12:02:38.702262 containerd[1589]: time="2025-01-29T12:02:38.701198639Z" level=info msg="StopPodSandbox for \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" returns successfully" Jan 29 12:02:38.705897 containerd[1589]: time="2025-01-29T12:02:38.703877926Z" level=info msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\"" Jan 29 12:02:38.795619 containerd[1589]: time="2025-01-29T12:02:38.795488508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jc4dj,Uid:a171cb43-a3c7-4d04-af40-031547ae9f5b,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:38.797069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871-rootfs.mount: Deactivated successfully. 
Jan 29 12:02:38.797739 systemd[1]: run-netns-cni\x2d4a80bdac\x2d8a07\x2d5a3d\x2d37ea\x2d35b334189878.mount: Deactivated successfully. Jan 29 12:02:38.797838 systemd[1]: var-lib-kubelet-pods-56e6a48b\x2d7702\x2d4cea\x2da191\x2d6f3a685269b6-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 29 12:02:38.797946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b-rootfs.mount: Deactivated successfully. Jan 29 12:02:38.798040 systemd[1]: var-lib-kubelet-pods-56e6a48b\x2d7702\x2d4cea\x2da191\x2d6f3a685269b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbdvnh.mount: Deactivated successfully. Jan 29 12:02:38.799107 systemd[1]: var-lib-kubelet-pods-56e6a48b\x2d7702\x2d4cea\x2da191\x2d6f3a685269b6-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 29 12:02:38.911660 containerd[1589]: time="2025-01-29T12:02:38.910298957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:38.911660 containerd[1589]: time="2025-01-29T12:02:38.910368957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:38.911660 containerd[1589]: time="2025-01-29T12:02:38.910435597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.911660 containerd[1589]: time="2025-01-29T12:02:38.910541237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.787 [WARNING][5486] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a58702a-d30f-45ab-b50c-e13b846313b4", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c", Pod:"calico-apiserver-7cf4d54ff6-gcwqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15e5b532e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.790 [INFO][5486] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.790 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" iface="eth0" netns="" Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.790 [INFO][5486] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.790 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.893 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.894 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.894 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.924 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.924 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.931 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:38.946501 containerd[1589]: 2025-01-29 12:02:38.942 [INFO][5486] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:38.947016 containerd[1589]: time="2025-01-29T12:02:38.946536540Z" level=info msg="TearDown network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\" successfully" Jan 29 12:02:38.947016 containerd[1589]: time="2025-01-29T12:02:38.946567340Z" level=info msg="StopPodSandbox for \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\" returns successfully" Jan 29 12:02:38.950200 containerd[1589]: time="2025-01-29T12:02:38.948121265Z" level=info msg="RemovePodSandbox for \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\"" Jan 29 12:02:38.951702 containerd[1589]: time="2025-01-29T12:02:38.951643355Z" level=info msg="Forcibly stopping sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\"" Jan 29 12:02:38.957191 containerd[1589]: time="2025-01-29T12:02:38.956174928Z" level=info msg="shim disconnected" id=0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166 namespace=k8s.io Jan 29 12:02:38.957191 containerd[1589]: time="2025-01-29T12:02:38.956238408Z" level=warning msg="cleaning up after shim disconnected" id=0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166 namespace=k8s.io Jan 29 12:02:38.957191 containerd[1589]: time="2025-01-29T12:02:38.956249968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:38.961622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166-rootfs.mount: Deactivated successfully. Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.846 [WARNING][5500] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0", GenerateName:"calico-kube-controllers-77c4b4575d-", Namespace:"calico-system", SelfLink:"", UID:"8ea098a5-9143-4ffd-a41c-59ec409201c4", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c4b4575d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871", Pod:"calico-kube-controllers-77c4b4575d-lt42c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali177ca7502d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.847 [INFO][5500] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.848 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" iface="eth0" netns="" Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.848 [INFO][5500] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.848 [INFO][5500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.988 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.989 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:38.989 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:39.007 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:39.007 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:39.010 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:39.039565 containerd[1589]: 2025-01-29 12:02:39.017 [INFO][5500] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:39.039565 containerd[1589]: time="2025-01-29T12:02:39.039343285Z" level=info msg="TearDown network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" successfully" Jan 29 12:02:39.039565 containerd[1589]: time="2025-01-29T12:02:39.039368805Z" level=info msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" returns successfully" Jan 29 12:02:39.087069 kubelet[2982]: I0129 12:02:39.085034 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea098a5-9143-4ffd-a41c-59ec409201c4-tigera-ca-bundle\") pod \"8ea098a5-9143-4ffd-a41c-59ec409201c4\" (UID: \"8ea098a5-9143-4ffd-a41c-59ec409201c4\") " Jan 29 12:02:39.087069 kubelet[2982]: I0129 12:02:39.085087 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmtjh\" (UniqueName: \"kubernetes.io/projected/8ea098a5-9143-4ffd-a41c-59ec409201c4-kube-api-access-tmtjh\") pod \"8ea098a5-9143-4ffd-a41c-59ec409201c4\" (UID: \"8ea098a5-9143-4ffd-a41c-59ec409201c4\") " Jan 29 12:02:39.116582 kubelet[2982]: I0129 12:02:39.116518 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea098a5-9143-4ffd-a41c-59ec409201c4-kube-api-access-tmtjh" (OuterVolumeSpecName: "kube-api-access-tmtjh") pod "8ea098a5-9143-4ffd-a41c-59ec409201c4" (UID: "8ea098a5-9143-4ffd-a41c-59ec409201c4"). InnerVolumeSpecName "kube-api-access-tmtjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:02:39.127732 containerd[1589]: time="2025-01-29T12:02:39.127121375Z" level=info msg="StopContainer for \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\" returns successfully" Jan 29 12:02:39.129157 kubelet[2982]: I0129 12:02:39.129043 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ea098a5-9143-4ffd-a41c-59ec409201c4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8ea098a5-9143-4ffd-a41c-59ec409201c4" (UID: "8ea098a5-9143-4ffd-a41c-59ec409201c4"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:02:39.146095 containerd[1589]: time="2025-01-29T12:02:39.146046189Z" level=info msg="StopPodSandbox for \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\"" Jan 29 12:02:39.146095 containerd[1589]: time="2025-01-29T12:02:39.146106509Z" level=info msg="Container to stop \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:02:39.168157 kubelet[2982]: I0129 12:02:39.167494 2982 scope.go:117] "RemoveContainer" containerID="d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf" Jan 29 12:02:39.186013 kubelet[2982]: I0129 12:02:39.185515 2982 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tmtjh\" (UniqueName: \"kubernetes.io/projected/8ea098a5-9143-4ffd-a41c-59ec409201c4-kube-api-access-tmtjh\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:39.186013 kubelet[2982]: I0129 12:02:39.185552 2982 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea098a5-9143-4ffd-a41c-59ec409201c4-tigera-ca-bundle\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:39.188586 containerd[1589]: time="2025-01-29T12:02:39.187933108Z" level=info msg="RemoveContainer for \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\"" Jan 29 12:02:39.201194 containerd[1589]: time="2025-01-29T12:02:39.199877262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jc4dj,Uid:a171cb43-a3c7-4d04-af40-031547ae9f5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d327e9483f41af317733fe8039845ef75b118e35fd09591472426d294f8e499\"" Jan 29 12:02:39.220852 containerd[1589]: time="2025-01-29T12:02:39.220709721Z" level=info msg="CreateContainer within sandbox \"8d327e9483f41af317733fe8039845ef75b118e35fd09591472426d294f8e499\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:02:39.234845 containerd[1589]: time="2025-01-29T12:02:39.234033199Z" level=info msg="RemoveContainer for \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\" returns successfully" Jan 29 12:02:39.240308 kubelet[2982]: I0129 12:02:39.236673 2982 scope.go:117] "RemoveContainer" containerID="d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf" Jan 29 12:02:39.240491 containerd[1589]: time="2025-01-29T12:02:39.237671289Z" level=error msg="ContainerStatus for \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\": not found" Jan 29 12:02:39.279271 kubelet[2982]: E0129 12:02:39.275676 2982 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\": not found" containerID="d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf" Jan 29 12:02:39.279271 kubelet[2982]: I0129 12:02:39.275825 2982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf"} err="failed to get container status \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\": rpc error: code = NotFound desc = an error occurred when try to 
find container \"d68f54701aada955e2b7d6a1ad0d019c3c815af825983d096fb98c3d6a92afdf\": not found" Jan 29 12:02:39.322852 containerd[1589]: time="2025-01-29T12:02:39.322575491Z" level=info msg="CreateContainer within sandbox \"8d327e9483f41af317733fe8039845ef75b118e35fd09591472426d294f8e499\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae0b0d502506b1b39f5a7929f96623a93a6bcd3764b3b19e7c46b036546b3451\"" Jan 29 12:02:39.329242 containerd[1589]: time="2025-01-29T12:02:39.323606054Z" level=info msg="StartContainer for \"ae0b0d502506b1b39f5a7929f96623a93a6bcd3764b3b19e7c46b036546b3451\"" Jan 29 12:02:39.382629 containerd[1589]: time="2025-01-29T12:02:39.381302818Z" level=info msg="shim disconnected" id=df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8 namespace=k8s.io Jan 29 12:02:39.382629 containerd[1589]: time="2025-01-29T12:02:39.381394178Z" level=warning msg="cleaning up after shim disconnected" id=df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8 namespace=k8s.io Jan 29 12:02:39.382629 containerd[1589]: time="2025-01-29T12:02:39.381405418Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:39.423112 containerd[1589]: time="2025-01-29T12:02:39.422320255Z" level=info msg="TearDown network for sandbox \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" successfully" Jan 29 12:02:39.428156 containerd[1589]: time="2025-01-29T12:02:39.427596790Z" level=info msg="StopPodSandbox for \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" returns successfully" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.393 [WARNING][5594] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a58702a-d30f-45ab-b50c-e13b846313b4", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"bbcb766f9dfefd0c651849325cea05291c15e1a5a47a1acaa5dad3c154b66c9c", Pod:"calico-apiserver-7cf4d54ff6-gcwqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15e5b532e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.393 [INFO][5594] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.393 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" iface="eth0" netns="" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.393 [INFO][5594] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.393 [INFO][5594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.442 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.442 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.442 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.465 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.465 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" HandleID="k8s-pod-network.4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--gcwqx-eth0" Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.470 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:39.485412 containerd[1589]: 2025-01-29 12:02:39.479 [INFO][5594] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93" Jan 29 12:02:39.490119 containerd[1589]: time="2025-01-29T12:02:39.490056167Z" level=info msg="TearDown network for sandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\" successfully" Jan 29 12:02:39.490691 kubelet[2982]: I0129 12:02:39.490568 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-typha-certs\") pod \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\" (UID: \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\") " Jan 29 12:02:39.490691 kubelet[2982]: I0129 12:02:39.490639 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-tigera-ca-bundle\") pod \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\" (UID: \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\") " Jan 29 12:02:39.490691 kubelet[2982]: I0129 12:02:39.490665 2982 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzjnt\" (UniqueName: \"kubernetes.io/projected/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-kube-api-access-hzjnt\") pod \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\" (UID: \"9ad8bc6e-a42d-4698-bbc7-1235ed8120ac\") " Jan 29 12:02:39.500229 containerd[1589]: time="2025-01-29T12:02:39.498821152Z" level=info msg="StartContainer for \"ae0b0d502506b1b39f5a7929f96623a93a6bcd3764b3b19e7c46b036546b3451\" returns successfully" Jan 29 12:02:39.501786 kubelet[2982]: I0129 12:02:39.498472 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-kube-api-access-hzjnt" (OuterVolumeSpecName: "kube-api-access-hzjnt") pod "9ad8bc6e-a42d-4698-bbc7-1235ed8120ac" (UID: "9ad8bc6e-a42d-4698-bbc7-1235ed8120ac"). InnerVolumeSpecName "kube-api-access-hzjnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:02:39.505688 kubelet[2982]: I0129 12:02:39.505064 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "9ad8bc6e-a42d-4698-bbc7-1235ed8120ac" (UID: "9ad8bc6e-a42d-4698-bbc7-1235ed8120ac"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:02:39.507133 kubelet[2982]: I0129 12:02:39.506453 2982 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9ad8bc6e-a42d-4698-bbc7-1235ed8120ac" (UID: "9ad8bc6e-a42d-4698-bbc7-1235ed8120ac"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:02:39.513204 containerd[1589]: time="2025-01-29T12:02:39.510108024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:02:39.513204 containerd[1589]: time="2025-01-29T12:02:39.510230265Z" level=info msg="RemovePodSandbox \"4191d52c9c941b86007f2f2a8575df4cabfce4e01e7c29b4a79a0832ae0b1a93\" returns successfully" Jan 29 12:02:39.513204 containerd[1589]: time="2025-01-29T12:02:39.512931672Z" level=info msg="StopPodSandbox for \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\"" Jan 29 12:02:39.591386 kubelet[2982]: I0129 12:02:39.591286 2982 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-tigera-ca-bundle\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:39.591386 kubelet[2982]: I0129 12:02:39.591333 2982 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hzjnt\" (UniqueName: \"kubernetes.io/projected/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-kube-api-access-hzjnt\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:39.591386 kubelet[2982]: I0129 12:02:39.591344 2982 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac-typha-certs\") on node \"ci-4081-3-0-b-488529c6ca\" DevicePath \"\"" Jan 29 12:02:39.641915 containerd[1589]: time="2025-01-29T12:02:39.641110557Z" level=info msg="shim disconnected" id=ae0b0d502506b1b39f5a7929f96623a93a6bcd3764b3b19e7c46b036546b3451 namespace=k8s.io Jan 29 12:02:39.644852 containerd[1589]: time="2025-01-29T12:02:39.641388958Z" level=warning msg="cleaning up after shim disconnected" id=ae0b0d502506b1b39f5a7929f96623a93a6bcd3764b3b19e7c46b036546b3451 namespace=k8s.io Jan 29 12:02:39.644852 containerd[1589]: time="2025-01-29T12:02:39.643659044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.605 [WARNING][5697] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"411fde72-7e67-4c52-beb5-274e475ac3a2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50", Pod:"coredns-7db6d8ff4d-7httm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia70da8503bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.606 [INFO][5697] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.606 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" iface="eth0" netns="" Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.606 [INFO][5697] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.606 [INFO][5697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.684 [INFO][5709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.684 [INFO][5709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.684 [INFO][5709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.713 [WARNING][5709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.713 [INFO][5709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.718 [INFO][5709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:39.722990 containerd[1589]: 2025-01-29 12:02:39.720 [INFO][5697] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.724557 containerd[1589]: time="2025-01-29T12:02:39.723148590Z" level=info msg="TearDown network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\" successfully" Jan 29 12:02:39.724557 containerd[1589]: time="2025-01-29T12:02:39.723217351Z" level=info msg="StopPodSandbox for \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\" returns successfully" Jan 29 12:02:39.726135 containerd[1589]: time="2025-01-29T12:02:39.724708355Z" level=info msg="RemovePodSandbox for \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\"" Jan 29 12:02:39.726135 containerd[1589]: time="2025-01-29T12:02:39.724758835Z" level=info msg="Forcibly stopping sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\"" Jan 29 12:02:39.789236 systemd[1]: var-lib-kubelet-pods-8ea098a5\x2d9143\x2d4ffd\x2da41c\x2d59ec409201c4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 29 12:02:39.791786 systemd[1]: var-lib-kubelet-pods-8ea098a5\x2d9143\x2d4ffd\x2da41c\x2d59ec409201c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtmtjh.mount: Deactivated successfully. Jan 29 12:02:39.791986 systemd[1]: var-lib-kubelet-pods-9ad8bc6e\x2da42d\x2d4698\x2dbbc7\x2d1235ed8120ac-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 29 12:02:39.792088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8-rootfs.mount: Deactivated successfully. Jan 29 12:02:39.792666 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8-shm.mount: Deactivated successfully. Jan 29 12:02:39.792832 systemd[1]: var-lib-kubelet-pods-9ad8bc6e\x2da42d\x2d4698\x2dbbc7\x2d1235ed8120ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhzjnt.mount: Deactivated successfully. Jan 29 12:02:39.792919 systemd[1]: var-lib-kubelet-pods-9ad8bc6e\x2da42d\x2d4698\x2dbbc7\x2d1235ed8120ac-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.805 [WARNING][5749] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"411fde72-7e67-4c52-beb5-274e475ac3a2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"02ef5ff8e4d501f44d6850e1c497a531a04767da0c01849380d1b025524bdd50", Pod:"coredns-7db6d8ff4d-7httm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia70da8503bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.805 [INFO][5749] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.805 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" iface="eth0" netns="" Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.805 [INFO][5749] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.805 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.844 [INFO][5756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.844 [INFO][5756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.844 [INFO][5756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.865 [WARNING][5756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.865 [INFO][5756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" HandleID="k8s-pod-network.b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--7httm-eth0" Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.870 [INFO][5756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:39.875430 containerd[1589]: 2025-01-29 12:02:39.872 [INFO][5749] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd" Jan 29 12:02:39.877615 containerd[1589]: time="2025-01-29T12:02:39.875486304Z" level=info msg="TearDown network for sandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\" successfully" Jan 29 12:02:39.882717 containerd[1589]: time="2025-01-29T12:02:39.882544684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:02:39.882717 containerd[1589]: time="2025-01-29T12:02:39.882647484Z" level=info msg="RemovePodSandbox \"b0d1a66808878ef68c2a1dbb259f646aea03e4fe4026815699d0ab0b3ba9f0dd\" returns successfully" Jan 29 12:02:39.883542 containerd[1589]: time="2025-01-29T12:02:39.883483686Z" level=info msg="StopPodSandbox for \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\"" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.943 [WARNING][5775] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f8acab89-8346-440a-bbf8-eaa1717109a0", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6", Pod:"csi-node-driver-f4x79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali127fa60017e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.944 [INFO][5775] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.944 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" iface="eth0" netns="" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.944 [INFO][5775] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.944 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.972 [INFO][5781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.972 [INFO][5781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.972 [INFO][5781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.987 [WARNING][5781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.987 [INFO][5781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.993 [INFO][5781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.002364 containerd[1589]: 2025-01-29 12:02:39.999 [INFO][5775] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.004327 containerd[1589]: time="2025-01-29T12:02:40.002327944Z" level=info msg="TearDown network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\" successfully" Jan 29 12:02:40.004848 containerd[1589]: time="2025-01-29T12:02:40.004327150Z" level=info msg="StopPodSandbox for \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\" returns successfully" Jan 29 12:02:40.004848 containerd[1589]: time="2025-01-29T12:02:40.005209433Z" level=info msg="RemovePodSandbox for \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\"" Jan 29 12:02:40.004848 containerd[1589]: time="2025-01-29T12:02:40.005255553Z" level=info msg="Forcibly stopping sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\"" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.057 [WARNING][5799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f8acab89-8346-440a-bbf8-eaa1717109a0", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"3aa8a8dfe0734cf47b2fad679fe442db77d3e56928220d73ccd20e5e7ab5c9e6", Pod:"csi-node-driver-f4x79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali127fa60017e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.058 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.058 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" iface="eth0" netns="" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.058 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.058 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.082 [INFO][5806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.082 [INFO][5806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.082 [INFO][5806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.095 [WARNING][5806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.095 [INFO][5806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" HandleID="k8s-pod-network.ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Workload="ci--4081--3--0--b--488529c6ca-k8s-csi--node--driver--f4x79-eth0" Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.098 [INFO][5806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.101874 containerd[1589]: 2025-01-29 12:02:40.100 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966" Jan 29 12:02:40.102550 containerd[1589]: time="2025-01-29T12:02:40.101912986Z" level=info msg="TearDown network for sandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\" successfully" Jan 29 12:02:40.106067 containerd[1589]: time="2025-01-29T12:02:40.106012198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:02:40.106422 containerd[1589]: time="2025-01-29T12:02:40.106095678Z" level=info msg="RemovePodSandbox \"ef16604c326016efc520c54e4a2f5004c6fd42ef7f54e14c2c76cb3e8f19a966\" returns successfully" Jan 29 12:02:40.107464 containerd[1589]: time="2025-01-29T12:02:40.107090241Z" level=info msg="StopPodSandbox for \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\"" Jan 29 12:02:40.175980 kubelet[2982]: I0129 12:02:40.175943 2982 scope.go:117] "RemoveContainer" containerID="0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166" Jan 29 12:02:40.183004 containerd[1589]: time="2025-01-29T12:02:40.180352528Z" level=info msg="RemoveContainer for \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\"" Jan 29 12:02:40.191786 containerd[1589]: time="2025-01-29T12:02:40.191738600Z" level=info msg="RemoveContainer for \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\" returns successfully" Jan 29 12:02:40.211261 kubelet[2982]: I0129 12:02:40.211222 2982 scope.go:117] "RemoveContainer" containerID="0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166" Jan 29 12:02:40.212568 containerd[1589]: time="2025-01-29T12:02:40.212523579Z" level=info msg="CreateContainer within sandbox \"8d327e9483f41af317733fe8039845ef75b118e35fd09591472426d294f8e499\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:02:40.214431 containerd[1589]: time="2025-01-29T12:02:40.213155221Z" level=error msg="ContainerStatus for \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\": not found" Jan 29 12:02:40.214815 kubelet[2982]: E0129 12:02:40.214746 2982 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\": not found" containerID="0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166" Jan 29 12:02:40.214815 kubelet[2982]: I0129 12:02:40.214783 2982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166"} err="failed to get container status \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e6970b48b358310162091006a6a096a6e53f2b947faad737d9c19d109575166\": not found" Jan 29 12:02:40.264354 containerd[1589]: time="2025-01-29T12:02:40.264123245Z" level=info msg="CreateContainer within sandbox \"8d327e9483f41af317733fe8039845ef75b118e35fd09591472426d294f8e499\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"10e087c62fb193681b6b11929b86a574194481a9f70252b159cf19405e3d9f65\"" Jan 29 12:02:40.266600 containerd[1589]: time="2025-01-29T12:02:40.266486692Z" level=info msg="StartContainer for \"10e087c62fb193681b6b11929b86a574194481a9f70252b159cf19405e3d9f65\"" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.155 [WARNING][5824] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c7ffc06-7657-4b76-aeac-67f169d0c448", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2", Pod:"calico-apiserver-7cf4d54ff6-8kcss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif86f281a59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.155 [INFO][5824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.155 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" iface="eth0" netns="" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.155 [INFO][5824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.155 [INFO][5824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.219 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.219 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.219 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.261 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.261 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.270 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.274967 containerd[1589]: 2025-01-29 12:02:40.272 [INFO][5824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.277430 containerd[1589]: time="2025-01-29T12:02:40.276017359Z" level=info msg="TearDown network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\" successfully" Jan 29 12:02:40.277430 containerd[1589]: time="2025-01-29T12:02:40.276295999Z" level=info msg="StopPodSandbox for \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\" returns successfully" Jan 29 12:02:40.278546 containerd[1589]: time="2025-01-29T12:02:40.278117965Z" level=info msg="RemovePodSandbox for \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\"" Jan 29 12:02:40.278546 containerd[1589]: time="2025-01-29T12:02:40.278215605Z" level=info msg="Forcibly stopping sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\"" Jan 29 12:02:40.389654 containerd[1589]: time="2025-01-29T12:02:40.388263476Z" level=info msg="StartContainer for \"10e087c62fb193681b6b11929b86a574194481a9f70252b159cf19405e3d9f65\" returns successfully" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.370 [WARNING][5861] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0", GenerateName:"calico-apiserver-7cf4d54ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c7ffc06-7657-4b76-aeac-67f169d0c448", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf4d54ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"0bbb20a44b3199dc46bedcd20d212e9585535b7fd8add0c1c30b3e138a2a75e2", Pod:"calico-apiserver-7cf4d54ff6-8kcss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif86f281a59c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.370 [INFO][5861] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.370 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" iface="eth0" netns="" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.370 [INFO][5861] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.370 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.406 [INFO][5889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.406 [INFO][5889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.406 [INFO][5889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.425 [WARNING][5889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.425 [INFO][5889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" HandleID="k8s-pod-network.9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--apiserver--7cf4d54ff6--8kcss-eth0" Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.429 [INFO][5889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.434424 containerd[1589]: 2025-01-29 12:02:40.431 [INFO][5861] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e" Jan 29 12:02:40.435533 containerd[1589]: time="2025-01-29T12:02:40.434478447Z" level=info msg="TearDown network for sandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\" successfully" Jan 29 12:02:40.438401 containerd[1589]: time="2025-01-29T12:02:40.438315738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:02:40.441000 containerd[1589]: time="2025-01-29T12:02:40.438419418Z" level=info msg="RemovePodSandbox \"9c6cbcf4326efb36efd835922662fafe08cebd15949a4ec0868898eb10af352e\" returns successfully" Jan 29 12:02:40.441000 containerd[1589]: time="2025-01-29T12:02:40.440724785Z" level=info msg="StopPodSandbox for \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\"" Jan 29 12:02:40.441000 containerd[1589]: time="2025-01-29T12:02:40.440883865Z" level=info msg="TearDown network for sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" successfully" Jan 29 12:02:40.441000 containerd[1589]: time="2025-01-29T12:02:40.440900585Z" level=info msg="StopPodSandbox for \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" returns successfully" Jan 29 12:02:40.442382 containerd[1589]: time="2025-01-29T12:02:40.441812388Z" level=info msg="RemovePodSandbox for \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\"" Jan 29 12:02:40.442382 containerd[1589]: time="2025-01-29T12:02:40.441888348Z" level=info msg="Forcibly stopping sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\"" Jan 29 12:02:40.442382 containerd[1589]: time="2025-01-29T12:02:40.441968628Z" level=info msg="TearDown network for sandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" successfully" Jan 29 12:02:40.448596 containerd[1589]: time="2025-01-29T12:02:40.448478086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:02:40.449313 containerd[1589]: time="2025-01-29T12:02:40.449095208Z" level=info msg="RemovePodSandbox \"1fb8cd386c6f010a0eef194ff3e3e6e18bddcfa6495af2e7b4c4a1cce0934c8b\" returns successfully" Jan 29 12:02:40.450850 containerd[1589]: time="2025-01-29T12:02:40.450805773Z" level=info msg="StopPodSandbox for \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\"" Jan 29 12:02:40.557611 kubelet[2982]: I0129 12:02:40.557394 2982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e6a48b-7702-4cea-a191-6f3a685269b6" path="/var/lib/kubelet/pods/56e6a48b-7702-4cea-a191-6f3a685269b6/volumes" Jan 29 12:02:40.562567 kubelet[2982]: I0129 12:02:40.561935 2982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea098a5-9143-4ffd-a41c-59ec409201c4" path="/var/lib/kubelet/pods/8ea098a5-9143-4ffd-a41c-59ec409201c4/volumes" Jan 29 12:02:40.563900 kubelet[2982]: I0129 12:02:40.562754 2982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad8bc6e-a42d-4698-bbc7-1235ed8120ac" path="/var/lib/kubelet/pods/9ad8bc6e-a42d-4698-bbc7-1235ed8120ac/volumes" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.515 [WARNING][5916] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5e90633-37b9-444b-9d34-7ea4734213c7", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e", Pod:"coredns-7db6d8ff4d-hrwhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abd84b6799", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.516 [INFO][5916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.516 [INFO][5916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" iface="eth0" netns="" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.516 [INFO][5916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.516 [INFO][5916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.547 [INFO][5923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.547 [INFO][5923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.548 [INFO][5923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.565 [WARNING][5923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.566 [INFO][5923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.572 [INFO][5923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.580054 containerd[1589]: 2025-01-29 12:02:40.576 [INFO][5916] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.580054 containerd[1589]: time="2025-01-29T12:02:40.579518057Z" level=info msg="TearDown network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\" successfully" Jan 29 12:02:40.580054 containerd[1589]: time="2025-01-29T12:02:40.579545257Z" level=info msg="StopPodSandbox for \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\" returns successfully" Jan 29 12:02:40.584242 containerd[1589]: time="2025-01-29T12:02:40.582738066Z" level=info msg="RemovePodSandbox for \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\"" Jan 29 12:02:40.584242 containerd[1589]: time="2025-01-29T12:02:40.582782146Z" level=info msg="Forcibly stopping sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\"" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.667 [WARNING][5942] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d5e90633-37b9-444b-9d34-7ea4734213c7", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"f403d1a2f5ffd3556479b61a5592b5509ee3b6ee4f3acaabdcbd5970e9e4f55e", Pod:"coredns-7db6d8ff4d-hrwhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8abd84b6799", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.668 [INFO][5942] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.668 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" iface="eth0" netns="" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.668 [INFO][5942] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.668 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.728 [INFO][5948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.730 [INFO][5948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.730 [INFO][5948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.750 [WARNING][5948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.751 [INFO][5948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" HandleID="k8s-pod-network.be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Workload="ci--4081--3--0--b--488529c6ca-k8s-coredns--7db6d8ff4d--hrwhb-eth0" Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.756 [INFO][5948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.765924 containerd[1589]: 2025-01-29 12:02:40.760 [INFO][5942] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9" Jan 29 12:02:40.770250 containerd[1589]: time="2025-01-29T12:02:40.765864624Z" level=info msg="TearDown network for sandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\" successfully" Jan 29 12:02:40.778500 containerd[1589]: time="2025-01-29T12:02:40.776232494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:02:40.778500 containerd[1589]: time="2025-01-29T12:02:40.776311614Z" level=info msg="RemovePodSandbox \"be04da30605af022b540f3daa7d1dd9ecd74f4df1cb7359d7b96ff70b2b78ff9\" returns successfully" Jan 29 12:02:40.779240 containerd[1589]: time="2025-01-29T12:02:40.778864781Z" level=info msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\"" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.882 [WARNING][5966] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.882 [INFO][5966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.882 [INFO][5966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" iface="eth0" netns="" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.882 [INFO][5966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.882 [INFO][5966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.919 [INFO][5972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.920 [INFO][5972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.920 [INFO][5972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.937 [WARNING][5972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.937 [INFO][5972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.941 [INFO][5972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:40.946830 containerd[1589]: 2025-01-29 12:02:40.944 [INFO][5966] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:40.949288 containerd[1589]: time="2025-01-29T12:02:40.946975777Z" level=info msg="TearDown network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" successfully" Jan 29 12:02:40.949288 containerd[1589]: time="2025-01-29T12:02:40.947149337Z" level=info msg="StopPodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" returns successfully" Jan 29 12:02:40.949288 containerd[1589]: time="2025-01-29T12:02:40.948853422Z" level=info msg="RemovePodSandbox for \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\"" Jan 29 12:02:40.949288 containerd[1589]: time="2025-01-29T12:02:40.948893982Z" level=info msg="Forcibly stopping sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\"" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.020 [WARNING][5990] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.021 [INFO][5990] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.021 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" iface="eth0" netns="" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.021 [INFO][5990] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.021 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.088 [INFO][5996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.088 [INFO][5996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.088 [INFO][5996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.103 [WARNING][5996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.103 [INFO][5996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" HandleID="k8s-pod-network.d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.110 [INFO][5996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:02:41.118923 containerd[1589]: 2025-01-29 12:02:41.113 [INFO][5990] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2" Jan 29 12:02:41.119329 containerd[1589]: time="2025-01-29T12:02:41.118969261Z" level=info msg="TearDown network for sandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" successfully" Jan 29 12:02:41.128890 containerd[1589]: time="2025-01-29T12:02:41.128609008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:02:41.129057 containerd[1589]: time="2025-01-29T12:02:41.128920729Z" level=info msg="RemovePodSandbox \"d764e1d90b47b482dce6b5d1e6fb8825e6cbec8a6e8f2c64403017e65e7b7ad2\" returns successfully" Jan 29 12:02:41.654592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10e087c62fb193681b6b11929b86a574194481a9f70252b159cf19405e3d9f65-rootfs.mount: Deactivated successfully. 
Jan 29 12:02:41.666637 containerd[1589]: time="2025-01-29T12:02:41.666525002Z" level=info msg="shim disconnected" id=10e087c62fb193681b6b11929b86a574194481a9f70252b159cf19405e3d9f65 namespace=k8s.io Jan 29 12:02:41.666637 containerd[1589]: time="2025-01-29T12:02:41.666633162Z" level=warning msg="cleaning up after shim disconnected" id=10e087c62fb193681b6b11929b86a574194481a9f70252b159cf19405e3d9f65 namespace=k8s.io Jan 29 12:02:41.666637 containerd[1589]: time="2025-01-29T12:02:41.666643402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:02:41.691383 containerd[1589]: time="2025-01-29T12:02:41.691305552Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:02:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:02:42.271619 kubelet[2982]: I0129 12:02:42.271414 2982 topology_manager.go:215] "Topology Admit Handler" podUID="acb8039c-6332-4202-b539-d4a0a25c4898" podNamespace="calico-system" podName="calico-typha-bb9458767-txg6x" Jan 29 12:02:42.274707 kubelet[2982]: E0129 12:02:42.273044 2982 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ad8bc6e-a42d-4698-bbc7-1235ed8120ac" containerName="calico-typha" Jan 29 12:02:42.274707 kubelet[2982]: E0129 12:02:42.273803 2982 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ea098a5-9143-4ffd-a41c-59ec409201c4" containerName="calico-kube-controllers" Jan 29 12:02:42.274707 kubelet[2982]: I0129 12:02:42.273893 2982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad8bc6e-a42d-4698-bbc7-1235ed8120ac" containerName="calico-typha" Jan 29 12:02:42.274707 kubelet[2982]: I0129 12:02:42.273909 2982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea098a5-9143-4ffd-a41c-59ec409201c4" containerName="calico-kube-controllers" Jan 29 12:02:42.327242 containerd[1589]: time="2025-01-29T12:02:42.327188976Z" level=info msg="CreateContainer within sandbox \"8d327e9483f41af317733fe8039845ef75b118e35fd09591472426d294f8e499\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:02:42.376483 containerd[1589]: time="2025-01-29T12:02:42.376340313Z" level=info msg="CreateContainer within sandbox \"8d327e9483f41af317733fe8039845ef75b118e35fd09591472426d294f8e499\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d0ffc37b21c7a0d95f8e6494a9f53a5a57604eed9d8d950c6f341eb43fe14829\"" Jan 29 12:02:42.380422 containerd[1589]: time="2025-01-29T12:02:42.377286996Z" level=info msg="StartContainer for \"d0ffc37b21c7a0d95f8e6494a9f53a5a57604eed9d8d950c6f341eb43fe14829\"" Jan 29 12:02:42.418774 kubelet[2982]: I0129 12:02:42.418641 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/acb8039c-6332-4202-b539-d4a0a25c4898-typha-certs\") pod \"calico-typha-bb9458767-txg6x\" (UID: \"acb8039c-6332-4202-b539-d4a0a25c4898\") " pod="calico-system/calico-typha-bb9458767-txg6x" Jan 29 12:02:42.419746 kubelet[2982]: I0129 12:02:42.419623 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k9s6\" (UniqueName: \"kubernetes.io/projected/acb8039c-6332-4202-b539-d4a0a25c4898-kube-api-access-2k9s6\") pod \"calico-typha-bb9458767-txg6x\" (UID: \"acb8039c-6332-4202-b539-d4a0a25c4898\") " pod="calico-system/calico-typha-bb9458767-txg6x" Jan 29 12:02:42.420027 kubelet[2982]: 
I0129 12:02:42.420005 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acb8039c-6332-4202-b539-d4a0a25c4898-tigera-ca-bundle\") pod \"calico-typha-bb9458767-txg6x\" (UID: \"acb8039c-6332-4202-b539-d4a0a25c4898\") " pod="calico-system/calico-typha-bb9458767-txg6x" Jan 29 12:02:42.457684 containerd[1589]: time="2025-01-29T12:02:42.457632301Z" level=info msg="StartContainer for \"d0ffc37b21c7a0d95f8e6494a9f53a5a57604eed9d8d950c6f341eb43fe14829\" returns successfully" Jan 29 12:02:42.580691 containerd[1589]: time="2025-01-29T12:02:42.580311204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb9458767-txg6x,Uid:acb8039c-6332-4202-b539-d4a0a25c4898,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:42.634333 containerd[1589]: time="2025-01-29T12:02:42.633715754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:42.634333 containerd[1589]: time="2025-01-29T12:02:42.633788154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:42.634333 containerd[1589]: time="2025-01-29T12:02:42.633804634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:42.634333 containerd[1589]: time="2025-01-29T12:02:42.633975075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:42.643926 kubelet[2982]: I0129 12:02:42.643843 2982 topology_manager.go:215] "Topology Admit Handler" podUID="6fa48210-93c7-49d1-a060-c54256f3dd92" podNamespace="calico-system" podName="calico-kube-controllers-c75fbff7f-g24ln" Jan 29 12:02:42.741130 containerd[1589]: time="2025-01-29T12:02:42.740942054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb9458767-txg6x,Uid:acb8039c-6332-4202-b539-d4a0a25c4898,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8eda427393ca865247d7b01042e8a2f2ebb86b00d85ab046e483f3d82150bf8\"" Jan 29 12:02:42.760874 containerd[1589]: time="2025-01-29T12:02:42.760664389Z" level=info msg="CreateContainer within sandbox \"c8eda427393ca865247d7b01042e8a2f2ebb86b00d85ab046e483f3d82150bf8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 12:02:42.791213 containerd[1589]: time="2025-01-29T12:02:42.791019274Z" level=info msg="CreateContainer within sandbox \"c8eda427393ca865247d7b01042e8a2f2ebb86b00d85ab046e483f3d82150bf8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"36e2e2ee79508819d88ed7bde83a4a9e89aaa52b069df52077ead9baad49bf6d\"" Jan 29 12:02:42.793039 containerd[1589]: time="2025-01-29T12:02:42.792976800Z" level=info msg="StartContainer for \"36e2e2ee79508819d88ed7bde83a4a9e89aaa52b069df52077ead9baad49bf6d\"" Jan 29 12:02:42.823262 kubelet[2982]: I0129 12:02:42.822396 2982 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv4qf\" (UniqueName: \"kubernetes.io/projected/6fa48210-93c7-49d1-a060-c54256f3dd92-kube-api-access-dv4qf\") pod \"calico-kube-controllers-c75fbff7f-g24ln\" (UID: \"6fa48210-93c7-49d1-a060-c54256f3dd92\") " pod="calico-system/calico-kube-controllers-c75fbff7f-g24ln" Jan 29 12:02:42.823634 kubelet[2982]: I0129 12:02:42.823538 2982 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa48210-93c7-49d1-a060-c54256f3dd92-tigera-ca-bundle\") pod \"calico-kube-controllers-c75fbff7f-g24ln\" (UID: \"6fa48210-93c7-49d1-a060-c54256f3dd92\") " pod="calico-system/calico-kube-controllers-c75fbff7f-g24ln" Jan 29 12:02:42.937240 containerd[1589]: time="2025-01-29T12:02:42.937184963Z" level=info msg="StartContainer for \"36e2e2ee79508819d88ed7bde83a4a9e89aaa52b069df52077ead9baad49bf6d\" returns successfully" Jan 29 12:02:42.964905 containerd[1589]: time="2025-01-29T12:02:42.963607557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c75fbff7f-g24ln,Uid:6fa48210-93c7-49d1-a060-c54256f3dd92,Namespace:calico-system,Attempt:0,}" Jan 29 12:02:43.242794 systemd-networkd[1239]: cali1a2eb4fa2c6: Link UP Jan 29 12:02:43.242974 systemd-networkd[1239]: cali1a2eb4fa2c6: Gained carrier Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.090 [INFO][6163] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0 calico-kube-controllers-c75fbff7f- calico-system 6fa48210-93c7-49d1-a060-c54256f3dd92 1032 0 2025-01-29 12:02:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c75fbff7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-b-488529c6ca calico-kube-controllers-c75fbff7f-g24ln eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1a2eb4fa2c6 [] []}} ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.090 [INFO][6163] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.148 [INFO][6186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" HandleID="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.171 [INFO][6186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" HandleID="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-b-488529c6ca", "pod":"calico-kube-controllers-c75fbff7f-g24ln", "timestamp":"2025-01-29 12:02:43.148311232 +0000 UTC"}, Hostname:"ci-4081-3-0-b-488529c6ca", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.172 [INFO][6186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.172 [INFO][6186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.172 [INFO][6186] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-b-488529c6ca' Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.178 [INFO][6186] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.188 [INFO][6186] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.196 [INFO][6186] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.200 [INFO][6186] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.203 [INFO][6186] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.203 [INFO][6186] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.207 [INFO][6186] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01 Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.217 [INFO][6186] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.236 [INFO][6186] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.71/26] block=192.168.18.64/26 handle="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.236 [INFO][6186] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.71/26] handle="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" host="ci-4081-3-0-b-488529c6ca" Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.236 [INFO][6186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
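The CNI ADD for calico-kube-controllers-c75fbff7f-g24ln follows the assignment path logged above: look up the host's block affinities, confirm the affine block 192.168.18.64/26, load it, and claim the next free address, which comes out as 192.168.18.71/26. The Go sketch below picks the next free address from such a block; the data structures are toys, and the already-used addresses below .68 are assumptions added so the result matches the log.

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeAddr walks an affine block (e.g. 192.168.18.64/26) and returns the
// first address not already in use, mimicking "Attempting to assign 1
// addresses from block" in the entries above.
func nextFreeAddr(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	// Block exhausted; a real allocator would look for another block.
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.18.64/26")
	// .68-.70 appear in the workload endpoints logged above; .64-.67 are
	// assumed used for illustration so the next free address lands on .71.
	used := map[netip.Addr]bool{}
	for _, s := range []string{
		"192.168.18.64", "192.168.18.65", "192.168.18.66", "192.168.18.67",
		"192.168.18.68", "192.168.18.69", "192.168.18.70",
	} {
		used[netip.MustParseAddr(s)] = true
	}
	if ip, ok := nextFreeAddr(block, used); ok {
		fmt.Println("assigned", ip) // 192.168.18.71, matching the log
	}
}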
Jan 29 12:02:43.311598 containerd[1589]: 2025-01-29 12:02:43.236 [INFO][6186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.71/26] IPv6=[] ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" HandleID="k8s-pod-network.fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" Jan 29 12:02:43.312205 containerd[1589]: 2025-01-29 12:02:43.239 [INFO][6163] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0", GenerateName:"calico-kube-controllers-c75fbff7f-", Namespace:"calico-system", SelfLink:"", UID:"6fa48210-93c7-49d1-a060-c54256f3dd92", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c75fbff7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"", Pod:"calico-kube-controllers-c75fbff7f-g24ln", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a2eb4fa2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:43.312205 containerd[1589]: 2025-01-29 12:02:43.239 [INFO][6163] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.71/32] ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" Jan 29 12:02:43.312205 containerd[1589]: 2025-01-29 12:02:43.239 [INFO][6163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a2eb4fa2c6 ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" Jan 29 12:02:43.312205 containerd[1589]: 2025-01-29 12:02:43.244 [INFO][6163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" Jan 29 12:02:43.312205 
containerd[1589]: 2025-01-29 12:02:43.248 [INFO][6163] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0", GenerateName:"calico-kube-controllers-c75fbff7f-", Namespace:"calico-system", SelfLink:"", UID:"6fa48210-93c7-49d1-a060-c54256f3dd92", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c75fbff7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-b-488529c6ca", ContainerID:"fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01", Pod:"calico-kube-controllers-c75fbff7f-g24ln", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a2eb4fa2c6", MAC:"d2:c6:32:72:4e:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:02:43.312205 containerd[1589]: 2025-01-29 12:02:43.300 [INFO][6163] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01" Namespace="calico-system" Pod="calico-kube-controllers-c75fbff7f-g24ln" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--c75fbff7f--g24ln-eth0" Jan 29 12:02:43.386664 kubelet[2982]: I0129 12:02:43.385027 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jc4dj" podStartSLOduration=5.385000251 podStartE2EDuration="5.385000251s" podCreationTimestamp="2025-01-29 12:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:43.382876285 +0000 UTC m=+64.945889073" watchObservedRunningTime="2025-01-29 12:02:43.385000251 +0000 UTC m=+64.948013039" Jan 29 12:02:43.404135 containerd[1589]: time="2025-01-29T12:02:43.403988664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:02:43.404135 containerd[1589]: time="2025-01-29T12:02:43.404066784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:02:43.404135 containerd[1589]: time="2025-01-29T12:02:43.404077944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:43.407175 containerd[1589]: time="2025-01-29T12:02:43.405065427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:02:43.427116 kubelet[2982]: I0129 12:02:43.427043 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bb9458767-txg6x" podStartSLOduration=6.427014608 podStartE2EDuration="6.427014608s" podCreationTimestamp="2025-01-29 12:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:43.424741282 +0000 UTC m=+64.987753990" watchObservedRunningTime="2025-01-29 12:02:43.427014608 +0000 UTC m=+64.990027356" Jan 29 12:02:43.556652 containerd[1589]: time="2025-01-29T12:02:43.556593689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c75fbff7f-g24ln,Uid:6fa48210-93c7-49d1-a060-c54256f3dd92,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01\"" Jan 29 12:02:43.573333 containerd[1589]: time="2025-01-29T12:02:43.573278935Z" level=info msg="CreateContainer within sandbox \"fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:02:43.596489 containerd[1589]: time="2025-01-29T12:02:43.596004799Z" level=info msg="CreateContainer within sandbox \"fb7f4d457f1a0b697aace3963389b028bc4321c4651a29eccdfeaa2d34a9ee01\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"be444ad50fee2947cd949995cbeec6d7b543eef7b9f815b9bdfe3c12e2ff5bc0\"" Jan 29 12:02:43.597553 containerd[1589]: time="2025-01-29T12:02:43.597374482Z" level=info msg="StartContainer for \"be444ad50fee2947cd949995cbeec6d7b543eef7b9f815b9bdfe3c12e2ff5bc0\"" Jan 29 12:02:43.703033 containerd[1589]: time="2025-01-29T12:02:43.702952856Z" level=info msg="StartContainer for \"be444ad50fee2947cd949995cbeec6d7b543eef7b9f815b9bdfe3c12e2ff5bc0\" returns successfully" Jan 29 12:02:44.298545 systemd-networkd[1239]: cali1a2eb4fa2c6: Gained IPv6LL Jan 29 12:02:45.376980 systemd[1]: run-containerd-runc-k8s.io-be444ad50fee2947cd949995cbeec6d7b543eef7b9f815b9bdfe3c12e2ff5bc0-runc.ZIphuG.mount: Deactivated successfully. 
Jan 29 12:02:46.416223 kubelet[2982]: I0129 12:02:46.415662 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c75fbff7f-g24ln" podStartSLOduration=7.415640272 podStartE2EDuration="7.415640272s" podCreationTimestamp="2025-01-29 12:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:02:44.3742362 +0000 UTC m=+65.937248908" watchObservedRunningTime="2025-01-29 12:02:46.415640272 +0000 UTC m=+67.978653020" Jan 29 12:03:06.274987 kubelet[2982]: I0129 12:03:06.274757 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:03:41.136381 containerd[1589]: time="2025-01-29T12:03:41.135609470Z" level=info msg="StopPodSandbox for \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\"" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.219 [WARNING][6687] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.219 [INFO][6687] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.219 [INFO][6687] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" iface="eth0" netns="" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.219 [INFO][6687] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.219 [INFO][6687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.255 [INFO][6693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.255 [INFO][6693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.255 [INFO][6693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.267 [WARNING][6693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.267 [INFO][6693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.271 [INFO][6693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:41.274976 containerd[1589]: 2025-01-29 12:03:41.273 [INFO][6687] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.275440 containerd[1589]: time="2025-01-29T12:03:41.275042546Z" level=info msg="TearDown network for sandbox \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" successfully" Jan 29 12:03:41.275440 containerd[1589]: time="2025-01-29T12:03:41.275076786Z" level=info msg="StopPodSandbox for \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" returns successfully" Jan 29 12:03:41.276366 containerd[1589]: time="2025-01-29T12:03:41.275989035Z" level=info msg="RemovePodSandbox for \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\"" Jan 29 12:03:41.276366 containerd[1589]: time="2025-01-29T12:03:41.276029995Z" level=info msg="Forcibly stopping sandbox \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\"" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.326 [WARNING][6712] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" WorkloadEndpoint="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.326 [INFO][6712] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.326 [INFO][6712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" iface="eth0" netns="" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.326 [INFO][6712] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.326 [INFO][6712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.352 [INFO][6718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.352 [INFO][6718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.352 [INFO][6718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.365 [WARNING][6718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.365 [INFO][6718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" HandleID="k8s-pod-network.77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Workload="ci--4081--3--0--b--488529c6ca-k8s-calico--kube--controllers--77c4b4575d--lt42c-eth0" Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.368 [INFO][6718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:41.373267 containerd[1589]: 2025-01-29 12:03:41.370 [INFO][6712] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871" Jan 29 12:03:41.373267 containerd[1589]: time="2025-01-29T12:03:41.372318264Z" level=info msg="TearDown network for sandbox \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" successfully" Jan 29 12:03:41.379253 containerd[1589]: time="2025-01-29T12:03:41.379196529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:03:41.379617 containerd[1589]: time="2025-01-29T12:03:41.379473812Z" level=info msg="RemovePodSandbox \"77322788680d1ea8d56034bfe4065fcb75c4ace5f44792ab7ae622c58aaec871\" returns successfully" Jan 29 12:03:41.380434 containerd[1589]: time="2025-01-29T12:03:41.380392140Z" level=info msg="StopPodSandbox for \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\"" Jan 29 12:03:41.380569 containerd[1589]: time="2025-01-29T12:03:41.380546142Z" level=info msg="TearDown network for sandbox \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" successfully" Jan 29 12:03:41.380614 containerd[1589]: time="2025-01-29T12:03:41.380571622Z" level=info msg="StopPodSandbox for \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" returns successfully" Jan 29 12:03:41.381067 containerd[1589]: time="2025-01-29T12:03:41.381039107Z" level=info msg="RemovePodSandbox for \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\"" Jan 29 12:03:41.381112 containerd[1589]: time="2025-01-29T12:03:41.381080467Z" level=info msg="Forcibly stopping sandbox \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\"" Jan 29 12:03:41.381207 containerd[1589]: time="2025-01-29T12:03:41.381155628Z" level=info msg="TearDown network for sandbox \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" successfully" Jan 29 12:03:41.386192 containerd[1589]: time="2025-01-29T12:03:41.386105794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:03:41.386605 containerd[1589]: time="2025-01-29T12:03:41.386209675Z" level=info msg="RemovePodSandbox \"df612af59347bc3edf54ae373a884163214be2775a474bde17c3cbface6f01b8\" returns successfully" Jan 29 12:03:42.989934 kernel: hrtimer: interrupt took 3098509 ns Jan 29 12:03:43.005774 systemd[1]: run-containerd-runc-k8s.io-be444ad50fee2947cd949995cbeec6d7b543eef7b9f815b9bdfe3c12e2ff5bc0-runc.JxL5LZ.mount: Deactivated successfully. Jan 29 12:03:44.486330 update_engine[1578]: I20250129 12:03:44.486200 1578 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 12:03:44.486330 update_engine[1578]: I20250129 12:03:44.486315 1578 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 12:03:44.487124 update_engine[1578]: I20250129 12:03:44.486936 1578 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 12:03:44.488719 update_engine[1578]: I20250129 12:03:44.488651 1578 omaha_request_params.cc:62] Current group set to lts Jan 29 12:03:44.489117 update_engine[1578]: I20250129 12:03:44.488843 1578 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 12:03:44.489117 update_engine[1578]: I20250129 12:03:44.488859 1578 update_attempter.cc:643] Scheduling an action processor start. 
Jan 29 12:03:44.489117 update_engine[1578]: I20250129 12:03:44.488889 1578 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:03:44.490450 locksmithd[1619]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 12:03:44.491037 update_engine[1578]: I20250129 12:03:44.490388 1578 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 12:03:44.492839 update_engine[1578]: I20250129 12:03:44.491230 1578 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:03:44.492839 update_engine[1578]: I20250129 12:03:44.491273 1578 omaha_request_action.cc:272] Request: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: Jan 29 12:03:44.492839 update_engine[1578]: I20250129 12:03:44.491284 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:03:44.495438 update_engine[1578]: I20250129 12:03:44.495384 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:03:44.496108 update_engine[1578]: I20250129 12:03:44.496069 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:03:44.497714 update_engine[1578]: E20250129 12:03:44.497661 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:03:44.497962 update_engine[1578]: I20250129 12:03:44.497941 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 12:03:50.499282 systemd[1]: sshd@8-188.34.178.132:22-36.41.71.82:53924.service: Deactivated successfully. Jan 29 12:03:54.397369 update_engine[1578]: I20250129 12:03:54.396096 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:03:54.397369 update_engine[1578]: I20250129 12:03:54.396567 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:03:54.397369 update_engine[1578]: I20250129 12:03:54.396999 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:03:54.398874 update_engine[1578]: E20250129 12:03:54.398821 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:03:54.399095 update_engine[1578]: I20250129 12:03:54.399071 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 12:03:54.935226 systemd[1]: Started sshd@9-188.34.178.132:22-36.41.71.82:45386.service - OpenSSH per-connection server daemon (36.41.71.82:45386). Jan 29 12:04:04.391203 update_engine[1578]: I20250129 12:04:04.390619 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:04:04.391203 update_engine[1578]: I20250129 12:04:04.390977 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:04:04.392503 update_engine[1578]: I20250129 12:04:04.392402 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 12:04:04.393345 update_engine[1578]: E20250129 12:04:04.393234 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:04:04.393345 update_engine[1578]: I20250129 12:04:04.393317 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 12:04:14.396106 update_engine[1578]: I20250129 12:04:14.395380 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:04:14.396106 update_engine[1578]: I20250129 12:04:14.395735 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:04:14.396106 update_engine[1578]: I20250129 12:04:14.396018 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:04:14.398977 update_engine[1578]: E20250129 12:04:14.397311 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397409 1578 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397423 1578 omaha_request_action.cc:617] Omaha request response: Jan 29 12:04:14.398977 update_engine[1578]: E20250129 12:04:14.397552 1578 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397589 1578 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397601 1578 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397663 1578 update_attempter.cc:306] Processing Done. Jan 29 12:04:14.398977 update_engine[1578]: E20250129 12:04:14.397691 1578 update_attempter.cc:619] Update failed. Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397699 1578 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397707 1578 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397717 1578 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397817 1578 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397851 1578 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:04:14.398977 update_engine[1578]: I20250129 12:04:14.397861 1578 omaha_request_action.cc:272] Request: Jan 29 12:04:14.398977 update_engine[1578]: Jan 29 12:04:14.398977 update_engine[1578]: Jan 29 12:04:14.398977 update_engine[1578]: Jan 29 12:04:14.399414 locksmithd[1619]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 12:04:14.399772 update_engine[1578]: Jan 29 12:04:14.399772 update_engine[1578]: Jan 29 12:04:14.399772 update_engine[1578]: Jan 29 12:04:14.399772 update_engine[1578]: I20250129 12:04:14.397869 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:04:14.399772 update_engine[1578]: I20250129 12:04:14.398118 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:04:14.399772 update_engine[1578]: I20250129 12:04:14.398499 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:04:14.400583 update_engine[1578]: E20250129 12:04:14.400327 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:04:14.400583 update_engine[1578]: I20250129 12:04:14.400411 1578 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:04:14.400583 update_engine[1578]: I20250129 12:04:14.400422 1578 omaha_request_action.cc:617] Omaha request response: Jan 29 12:04:14.400583 update_engine[1578]: I20250129 12:04:14.400431 1578 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:04:14.400583 update_engine[1578]: I20250129 12:04:14.400439 1578 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:04:14.400583 update_engine[1578]: I20250129 12:04:14.400445 1578 update_attempter.cc:306] Processing Done. Jan 29 12:04:14.400583 update_engine[1578]: I20250129 12:04:14.400452 1578 update_attempter.cc:310] Error event sent. Jan 29 12:04:14.400583 update_engine[1578]: I20250129 12:04:14.400465 1578 update_check_scheduler.cc:74] Next update check in 48m24s Jan 29 12:04:14.400901 locksmithd[1619]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 12:05:54.957753 systemd[1]: sshd@9-188.34.178.132:22-36.41.71.82:45386.service: Deactivated successfully. Jan 29 12:06:00.560235 systemd[1]: Started sshd@10-188.34.178.132:22-36.41.71.82:42270.service - OpenSSH per-connection server daemon (36.41.71.82:42270). Jan 29 12:06:12.989516 systemd[1]: run-containerd-runc-k8s.io-be444ad50fee2947cd949995cbeec6d7b543eef7b9f815b9bdfe3c12e2ff5bc0-runc.NlbeMT.mount: Deactivated successfully. Jan 29 12:06:31.409657 systemd[1]: Started sshd@11-188.34.178.132:22-139.178.89.65:53560.service - OpenSSH per-connection server daemon (139.178.89.65:53560). Jan 29 12:06:32.397062 sshd[7091]: Accepted publickey for core from 139.178.89.65 port 53560 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:06:32.399931 sshd[7091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:32.407460 systemd-logind[1568]: New session 8 of user core. 
Jan 29 12:06:32.414804 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:06:33.187891 sshd[7091]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:33.193750 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:06:33.194616 systemd[1]: sshd@11-188.34.178.132:22-139.178.89.65:53560.service: Deactivated successfully. Jan 29 12:06:33.202718 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:06:33.207962 systemd-logind[1568]: Removed session 8. Jan 29 12:06:38.363688 systemd[1]: Started sshd@12-188.34.178.132:22-139.178.89.65:53568.service - OpenSSH per-connection server daemon (139.178.89.65:53568). Jan 29 12:06:39.355264 sshd[7105]: Accepted publickey for core from 139.178.89.65 port 53568 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:06:39.357518 sshd[7105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:39.365979 systemd-logind[1568]: New session 9 of user core. Jan 29 12:06:39.368625 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:06:40.148910 sshd[7105]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:40.161589 systemd[1]: sshd@12-188.34.178.132:22-139.178.89.65:53568.service: Deactivated successfully. Jan 29 12:06:40.173520 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:06:40.180371 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:06:40.184915 systemd-logind[1568]: Removed session 9. Jan 29 12:06:45.315501 systemd[1]: Started sshd@13-188.34.178.132:22-139.178.89.65:45778.service - OpenSSH per-connection server daemon (139.178.89.65:45778). Jan 29 12:06:46.309093 sshd[7181]: Accepted publickey for core from 139.178.89.65 port 45778 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:06:46.312468 sshd[7181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:46.325372 systemd-logind[1568]: New session 10 of user core. Jan 29 12:06:46.328667 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:06:47.099981 sshd[7181]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:47.104410 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:06:47.104620 systemd[1]: sshd@13-188.34.178.132:22-139.178.89.65:45778.service: Deactivated successfully. Jan 29 12:06:47.109079 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:06:47.112594 systemd-logind[1568]: Removed session 10. Jan 29 12:06:47.269593 systemd[1]: Started sshd@14-188.34.178.132:22-139.178.89.65:45784.service - OpenSSH per-connection server daemon (139.178.89.65:45784). Jan 29 12:06:48.262694 sshd[7197]: Accepted publickey for core from 139.178.89.65 port 45784 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:06:48.265426 sshd[7197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:48.273690 systemd-logind[1568]: New session 11 of user core. Jan 29 12:06:48.279887 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:06:49.073932 sshd[7197]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:49.079627 systemd[1]: sshd@14-188.34.178.132:22-139.178.89.65:45784.service: Deactivated successfully. Jan 29 12:06:49.087696 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. 
Jan 29 12:06:49.088033 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:06:49.092248 systemd-logind[1568]: Removed session 11. Jan 29 12:06:49.251666 systemd[1]: Started sshd@15-188.34.178.132:22-139.178.89.65:45798.service - OpenSSH per-connection server daemon (139.178.89.65:45798). Jan 29 12:06:50.242757 sshd[7212]: Accepted publickey for core from 139.178.89.65 port 45798 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:06:50.245857 sshd[7212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:50.260713 systemd-logind[1568]: New session 12 of user core. Jan 29 12:06:50.267595 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:06:51.007027 sshd[7212]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:51.021488 systemd[1]: sshd@15-188.34.178.132:22-139.178.89.65:45798.service: Deactivated successfully. Jan 29 12:06:51.027750 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:06:51.030795 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:06:51.033830 systemd-logind[1568]: Removed session 12. Jan 29 12:06:56.178315 systemd[1]: Started sshd@16-188.34.178.132:22-139.178.89.65:56894.service - OpenSSH per-connection server daemon (139.178.89.65:56894). Jan 29 12:06:57.162336 sshd[7228]: Accepted publickey for core from 139.178.89.65 port 56894 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:06:57.164596 sshd[7228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:57.173131 systemd-logind[1568]: New session 13 of user core. Jan 29 12:06:57.176569 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:06:57.944539 sshd[7228]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:57.956555 systemd[1]: sshd@16-188.34.178.132:22-139.178.89.65:56894.service: Deactivated successfully. Jan 29 12:06:57.962552 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:06:57.965655 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:06:57.968926 systemd-logind[1568]: Removed session 13. Jan 29 12:06:58.112584 systemd[1]: Started sshd@17-188.34.178.132:22-139.178.89.65:56906.service - OpenSSH per-connection server daemon (139.178.89.65:56906). Jan 29 12:06:59.098349 sshd[7242]: Accepted publickey for core from 139.178.89.65 port 56906 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:06:59.101156 sshd[7242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:59.110609 systemd-logind[1568]: New session 14 of user core. Jan 29 12:06:59.115564 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:07:00.035153 sshd[7242]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:00.041812 systemd[1]: sshd@17-188.34.178.132:22-139.178.89.65:56906.service: Deactivated successfully. Jan 29 12:07:00.050452 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:07:00.054924 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:07:00.057004 systemd-logind[1568]: Removed session 14. Jan 29 12:07:00.202670 systemd[1]: Started sshd@18-188.34.178.132:22-139.178.89.65:56908.service - OpenSSH per-connection server daemon (139.178.89.65:56908). 
Jan 29 12:07:01.198765 sshd[7253]: Accepted publickey for core from 139.178.89.65 port 56908 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:07:01.201483 sshd[7253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:01.208694 systemd-logind[1568]: New session 15 of user core. Jan 29 12:07:01.215906 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:07:03.783038 sshd[7253]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:03.788556 systemd[1]: sshd@18-188.34.178.132:22-139.178.89.65:56908.service: Deactivated successfully. Jan 29 12:07:03.794087 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:07:03.794948 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:07:03.797722 systemd-logind[1568]: Removed session 15. Jan 29 12:07:03.951523 systemd[1]: Started sshd@19-188.34.178.132:22-139.178.89.65:55300.service - OpenSSH per-connection server daemon (139.178.89.65:55300). Jan 29 12:07:04.941882 sshd[7272]: Accepted publickey for core from 139.178.89.65 port 55300 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:07:04.945147 sshd[7272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:04.951929 systemd-logind[1568]: New session 16 of user core. Jan 29 12:07:04.957790 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:07:05.883980 sshd[7272]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:05.890414 systemd[1]: sshd@19-188.34.178.132:22-139.178.89.65:55300.service: Deactivated successfully. Jan 29 12:07:05.903344 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:07:05.904137 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:07:05.908301 systemd-logind[1568]: Removed session 16. Jan 29 12:07:06.052811 systemd[1]: Started sshd@20-188.34.178.132:22-139.178.89.65:55316.service - OpenSSH per-connection server daemon (139.178.89.65:55316). Jan 29 12:07:07.047189 sshd[7284]: Accepted publickey for core from 139.178.89.65 port 55316 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:07:07.052366 sshd[7284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:07.062294 systemd-logind[1568]: New session 17 of user core. Jan 29 12:07:07.068641 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:07:07.828540 sshd[7284]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:07.836066 systemd[1]: sshd@20-188.34.178.132:22-139.178.89.65:55316.service: Deactivated successfully. Jan 29 12:07:07.840579 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:07:07.841671 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:07:07.843211 systemd-logind[1568]: Removed session 17. Jan 29 12:07:12.995804 systemd[1]: Started sshd@21-188.34.178.132:22-139.178.89.65:51418.service - OpenSSH per-connection server daemon (139.178.89.65:51418). Jan 29 12:07:13.995888 sshd[7338]: Accepted publickey for core from 139.178.89.65 port 51418 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:07:13.999672 sshd[7338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:14.009491 systemd-logind[1568]: New session 18 of user core. Jan 29 12:07:14.013654 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 12:07:14.762798 sshd[7338]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:14.768508 systemd[1]: sshd@21-188.34.178.132:22-139.178.89.65:51418.service: Deactivated successfully. Jan 29 12:07:14.774291 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:07:14.775495 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:07:14.777389 systemd-logind[1568]: Removed session 18. Jan 29 12:07:19.928761 systemd[1]: Started sshd@22-188.34.178.132:22-139.178.89.65:51432.service - OpenSSH per-connection server daemon (139.178.89.65:51432). Jan 29 12:07:20.911315 sshd[7355]: Accepted publickey for core from 139.178.89.65 port 51432 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:07:20.921667 sshd[7355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:07:20.939881 systemd-logind[1568]: New session 19 of user core. Jan 29 12:07:20.944582 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:07:21.668511 sshd[7355]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:21.675412 systemd[1]: sshd@22-188.34.178.132:22-139.178.89.65:51432.service: Deactivated successfully. Jan 29 12:07:21.680821 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:07:21.680984 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:07:21.682896 systemd-logind[1568]: Removed session 19. Jan 29 12:07:37.155759 containerd[1589]: time="2025-01-29T12:07:37.153852484Z" level=info msg="shim disconnected" id=9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098 namespace=k8s.io Jan 29 12:07:37.155759 containerd[1589]: time="2025-01-29T12:07:37.153974806Z" level=warning msg="cleaning up after shim disconnected" id=9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098 namespace=k8s.io Jan 29 12:07:37.155759 containerd[1589]: time="2025-01-29T12:07:37.153986726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:37.157627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098-rootfs.mount: Deactivated successfully. Jan 29 12:07:37.201993 containerd[1589]: time="2025-01-29T12:07:37.201707076Z" level=info msg="shim disconnected" id=2930cc7aa7778a65b312c62f23dd706d8beb8930bba84bdfbef25c78ab8180e7 namespace=k8s.io Jan 29 12:07:37.201993 containerd[1589]: time="2025-01-29T12:07:37.201790397Z" level=warning msg="cleaning up after shim disconnected" id=2930cc7aa7778a65b312c62f23dd706d8beb8930bba84bdfbef25c78ab8180e7 namespace=k8s.io Jan 29 12:07:37.201993 containerd[1589]: time="2025-01-29T12:07:37.201799477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:37.203987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2930cc7aa7778a65b312c62f23dd706d8beb8930bba84bdfbef25c78ab8180e7-rootfs.mount: Deactivated successfully. 
Jan 29 12:07:37.223109 containerd[1589]: time="2025-01-29T12:07:37.222989921Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:07:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:07:37.315267 kubelet[2982]: I0129 12:07:37.314792 2982 scope.go:117] "RemoveContainer" containerID="9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098" Jan 29 12:07:37.318049 containerd[1589]: time="2025-01-29T12:07:37.318005775Z" level=info msg="CreateContainer within sandbox \"a29ffd3ea8c41a72b7e3ab544bfdaf842d7f4722c9dc41800f1c2b3a9968ee6f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 29 12:07:37.319733 kubelet[2982]: I0129 12:07:37.319183 2982 scope.go:117] "RemoveContainer" containerID="2930cc7aa7778a65b312c62f23dd706d8beb8930bba84bdfbef25c78ab8180e7" Jan 29 12:07:37.323156 containerd[1589]: time="2025-01-29T12:07:37.323088714Z" level=info msg="CreateContainer within sandbox \"cbc52ec1593faa9935418e16957abf3c12b4debeeaf2c02e048c9a69f314a890\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 12:07:37.351348 containerd[1589]: time="2025-01-29T12:07:37.351299319Z" level=info msg="CreateContainer within sandbox \"a29ffd3ea8c41a72b7e3ab544bfdaf842d7f4722c9dc41800f1c2b3a9968ee6f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"13e798387e5e1f8d26f19c87d50b61116fd2324b6884ef6dd3cd6695a69ef506\"" Jan 29 12:07:37.353721 containerd[1589]: time="2025-01-29T12:07:37.352721095Z" level=info msg="StartContainer for \"13e798387e5e1f8d26f19c87d50b61116fd2324b6884ef6dd3cd6695a69ef506\"" Jan 29 12:07:37.356625 containerd[1589]: time="2025-01-29T12:07:37.356576900Z" level=info msg="CreateContainer within sandbox \"cbc52ec1593faa9935418e16957abf3c12b4debeeaf2c02e048c9a69f314a890\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"855edb3dce86f72f59704b920aac6c113b574ef8756bab83954f40b61737baa3\"" Jan 29 12:07:37.357546 containerd[1589]: time="2025-01-29T12:07:37.357512790Z" level=info msg="StartContainer for \"855edb3dce86f72f59704b920aac6c113b574ef8756bab83954f40b61737baa3\"" Jan 29 12:07:37.422876 kubelet[2982]: E0129 12:07:37.422135 2982 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56774->10.0.0.2:2379: read: connection timed out" Jan 29 12:07:37.450309 containerd[1589]: time="2025-01-29T12:07:37.450240259Z" level=info msg="StartContainer for \"13e798387e5e1f8d26f19c87d50b61116fd2324b6884ef6dd3cd6695a69ef506\" returns successfully" Jan 29 12:07:37.455744 containerd[1589]: time="2025-01-29T12:07:37.455700522Z" level=info msg="StartContainer for \"855edb3dce86f72f59704b920aac6c113b574ef8756bab83954f40b61737baa3\" returns successfully" Jan 29 12:07:39.340469 kubelet[2982]: E0129 12:07:39.339752 2982 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56632->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-0-b-488529c6ca.181f28717075c861 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-0-b-488529c6ca,UID:413511ca7d4bf6f458ce79a2e24d0a70,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-b-488529c6ca,},FirstTimestamp:2025-01-29 12:07:28.871483489 +0000 UTC m=+350.434496277,LastTimestamp:2025-01-29 12:07:28.871483489 +0000 UTC m=+350.434496277,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-b-488529c6ca,}" Jan 29 12:07:41.993411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13e798387e5e1f8d26f19c87d50b61116fd2324b6884ef6dd3cd6695a69ef506-rootfs.mount: Deactivated successfully. Jan 29 12:07:42.000479 containerd[1589]: time="2025-01-29T12:07:42.000365359Z" level=info msg="shim disconnected" id=13e798387e5e1f8d26f19c87d50b61116fd2324b6884ef6dd3cd6695a69ef506 namespace=k8s.io Jan 29 12:07:42.000479 containerd[1589]: time="2025-01-29T12:07:42.000481760Z" level=warning msg="cleaning up after shim disconnected" id=13e798387e5e1f8d26f19c87d50b61116fd2324b6884ef6dd3cd6695a69ef506 namespace=k8s.io Jan 29 12:07:42.001246 containerd[1589]: time="2025-01-29T12:07:42.000495040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:42.342582 kubelet[2982]: I0129 12:07:42.341664 2982 scope.go:117] "RemoveContainer" containerID="9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098" Jan 29 12:07:42.342582 kubelet[2982]: I0129 12:07:42.342140 2982 scope.go:117] "RemoveContainer" containerID="13e798387e5e1f8d26f19c87d50b61116fd2324b6884ef6dd3cd6695a69ef506" Jan 29 12:07:42.342582 kubelet[2982]: E0129 12:07:42.342470 2982 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7bc55997bb-wmqr5_tigera-operator(74cfde88-6828-4940-a16d-a02d42e9ebd6)\"" pod="tigera-operator/tigera-operator-7bc55997bb-wmqr5" podUID="74cfde88-6828-4940-a16d-a02d42e9ebd6" Jan 29 12:07:42.345375 containerd[1589]: time="2025-01-29T12:07:42.345276044Z" level=info msg="RemoveContainer for \"9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098\"" Jan 29 12:07:42.352958 containerd[1589]: time="2025-01-29T12:07:42.352854689Z" level=info msg="RemoveContainer for \"9d56df801354d95dd24d6b7f9dba3cf8f2aaaa6318dd0afd3e7305732d241098\" returns successfully" Jan 29 12:07:42.640293 containerd[1589]: time="2025-01-29T12:07:42.637703858Z" level=info msg="shim disconnected" id=57d9a279ba65632826399ab9faa34a6541550788222c23ee94c66f59ddbab4c8 namespace=k8s.io Jan 29 12:07:42.640293 containerd[1589]: time="2025-01-29T12:07:42.637861179Z" level=warning msg="cleaning up after shim disconnected" id=57d9a279ba65632826399ab9faa34a6541550788222c23ee94c66f59ddbab4c8 namespace=k8s.io Jan 29 12:07:42.640293 containerd[1589]: time="2025-01-29T12:07:42.637917740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:42.638691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57d9a279ba65632826399ab9faa34a6541550788222c23ee94c66f59ddbab4c8-rootfs.mount: Deactivated successfully.