Jan 30 14:10:40.919616 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 14:10:40.919652 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:10:40.919665 kernel: KASLR enabled
Jan 30 14:10:40.919671 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 30 14:10:40.919678 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 30 14:10:40.919683 kernel: random: crng init done
Jan 30 14:10:40.919691 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:10:40.919697 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 30 14:10:40.919703 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 30 14:10:40.919711 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919718 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919724 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919730 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919737 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919744 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919768 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919776 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919813 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:40.919845 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 14:10:40.919866 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 30 14:10:40.919888 kernel: NUMA: Failed to initialise from firmware
Jan 30 14:10:40.919896 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 14:10:40.919903 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 30 14:10:40.919910 kernel: Zone ranges:
Jan 30 14:10:40.919916 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 14:10:40.919927 kernel: DMA32 empty
Jan 30 14:10:40.919934 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 30 14:10:40.919940 kernel: Movable zone start for each node
Jan 30 14:10:40.919947 kernel: Early memory node ranges
Jan 30 14:10:40.919954 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 30 14:10:40.919960 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 30 14:10:40.919967 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 30 14:10:40.919973 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 30 14:10:40.919980 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 30 14:10:40.919986 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 30 14:10:40.919993 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 30 14:10:40.919999 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 14:10:40.920020 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 30 14:10:40.920030 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:10:40.920037 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 14:10:40.920048 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:10:40.920072 kernel: psci: Trusted OS migration not required
Jan 30 14:10:40.920081 kernel: psci: SMC Calling Convention v1.1
Jan 30 14:10:40.920092 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 14:10:40.920099 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:10:40.920106 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:10:40.920114 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:10:40.920121 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:10:40.920127 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:10:40.920137 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 14:10:40.920145 kernel: CPU features: detected: Spectre-v4
Jan 30 14:10:40.920153 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:10:40.920162 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 14:10:40.920173 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 14:10:40.920180 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 14:10:40.920188 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 14:10:40.920195 kernel: alternatives: applying boot alternatives
Jan 30 14:10:40.920204 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:40.920212 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:10:40.920219 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:10:40.920227 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:10:40.920234 kernel: Fallback order for Node 0: 0
Jan 30 14:10:40.920241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 30 14:10:40.920248 kernel: Policy zone: Normal
Jan 30 14:10:40.920257 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:10:40.920264 kernel: software IO TLB: area num 2.
Jan 30 14:10:40.920271 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 30 14:10:40.920278 kernel: Memory: 3882936K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213064K reserved, 0K cma-reserved)
Jan 30 14:10:40.920286 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:10:40.920293 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:10:40.920316 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:10:40.920324 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:10:40.920331 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:10:40.920338 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:10:40.920345 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:10:40.920355 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:10:40.920362 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:10:40.920369 kernel: GICv3: 256 SPIs implemented
Jan 30 14:10:40.920376 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:10:40.920383 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:10:40.920390 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 14:10:40.920397 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 14:10:40.920404 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 14:10:40.920411 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 14:10:40.920419 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 14:10:40.920426 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 30 14:10:40.920433 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 30 14:10:40.920442 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:10:40.920449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:40.920456 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 14:10:40.920463 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 14:10:40.920471 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 14:10:40.920478 kernel: Console: colour dummy device 80x25
Jan 30 14:10:40.920485 kernel: ACPI: Core revision 20230628
Jan 30 14:10:40.922543 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 14:10:40.922565 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:10:40.922573 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:10:40.922587 kernel: landlock: Up and running.
Jan 30 14:10:40.922595 kernel: SELinux: Initializing.
Jan 30 14:10:40.922602 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:40.922610 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:40.922617 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:40.922625 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:40.922632 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:10:40.922640 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:10:40.922648 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 14:10:40.922657 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 14:10:40.922664 kernel: Remapping and enabling EFI services.
Jan 30 14:10:40.922671 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:10:40.922678 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:10:40.922685 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 14:10:40.922692 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 30 14:10:40.922700 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:40.922707 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 14:10:40.922714 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:10:40.922721 kernel: SMP: Total of 2 processors activated.
Jan 30 14:10:40.922730 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:10:40.922738 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 14:10:40.922750 kernel: CPU features: detected: Common not Private translations
Jan 30 14:10:40.922760 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:10:40.922768 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 14:10:40.922775 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 14:10:40.922783 kernel: CPU features: detected: LSE atomic instructions
Jan 30 14:10:40.922790 kernel: CPU features: detected: Privileged Access Never
Jan 30 14:10:40.922798 kernel: CPU features: detected: RAS Extension Support
Jan 30 14:10:40.922808 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 14:10:40.922815 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:10:40.922822 kernel: alternatives: applying system-wide alternatives
Jan 30 14:10:40.922830 kernel: devtmpfs: initialized
Jan 30 14:10:40.922838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:10:40.922845 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:10:40.922853 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:10:40.922862 kernel: SMBIOS 3.0.0 present.
Jan 30 14:10:40.922870 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 30 14:10:40.922878 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:10:40.922885 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:10:40.922893 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:10:40.922901 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:10:40.922909 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:10:40.922917 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Jan 30 14:10:40.922924 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:10:40.922934 kernel: cpuidle: using governor menu
Jan 30 14:10:40.922943 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:10:40.922951 kernel: ASID allocator initialised with 32768 entries
Jan 30 14:10:40.922958 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:10:40.922966 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:10:40.922974 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 14:10:40.922981 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 14:10:40.922989 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:10:40.923003 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:10:40.923014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:10:40.923022 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:10:40.923029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:10:40.923037 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:10:40.923045 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:10:40.923055 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:10:40.923063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:10:40.923071 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:10:40.923082 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:10:40.923094 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:10:40.923102 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:10:40.923109 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:10:40.923117 kernel: ACPI: Interpreter enabled
Jan 30 14:10:40.923125 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:10:40.923132 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 14:10:40.923155 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 14:10:40.923163 kernel: printk: console [ttyAMA0] enabled
Jan 30 14:10:40.923171 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:10:40.923381 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:10:40.923476 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:10:40.923653 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:10:40.923726 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 14:10:40.923792 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 14:10:40.923802 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 14:10:40.923810 kernel: PCI host bridge to bus 0000:00
Jan 30 14:10:40.923890 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 14:10:40.923951 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 14:10:40.924011 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 14:10:40.924070 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:10:40.924154 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 14:10:40.924234 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 30 14:10:40.925180 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 30 14:10:40.925342 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 14:10:40.925435 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.926524 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 30 14:10:40.926736 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.926807 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 30 14:10:40.926882 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.926960 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 30 14:10:40.927044 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.927111 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 30 14:10:40.927186 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.927253 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 30 14:10:40.927376 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.927455 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 30 14:10:40.927549 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.927619 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 30 14:10:40.927697 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.927764 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 30 14:10:40.927839 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:40.927910 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 30 14:10:40.927986 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 30 14:10:40.928054 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 30 14:10:40.928153 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 14:10:40.929155 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 30 14:10:40.929255 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 14:10:40.929355 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 14:10:40.929457 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 14:10:40.929557 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 30 14:10:40.929646 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 14:10:40.929725 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 30 14:10:40.929806 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 30 14:10:40.929907 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 14:10:40.929988 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 30 14:10:40.930072 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 14:10:40.930148 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 30 14:10:40.930222 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 30 14:10:40.930344 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 14:10:40.930430 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 30 14:10:40.930533 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 14:10:40.930620 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 14:10:40.930695 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 30 14:10:40.930770 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 30 14:10:40.930844 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 14:10:40.930920 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 30 14:10:40.930995 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 30 14:10:40.931069 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 30 14:10:40.931146 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 30 14:10:40.931217 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 30 14:10:40.931287 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 30 14:10:40.931377 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 14:10:40.931451 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 30 14:10:40.933056 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 30 14:10:40.933178 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 14:10:40.933251 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 30 14:10:40.933373 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 30 14:10:40.933455 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 14:10:40.933574 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 30 14:10:40.933651 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 30 14:10:40.933728 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 14:10:40.933797 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 30 14:10:40.933871 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 30 14:10:40.934060 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 14:10:40.934141 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 30 14:10:40.934209 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 30 14:10:40.934284 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 14:10:40.934371 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 30 14:10:40.934468 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 30 14:10:40.934660 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 14:10:40.934816 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 30 14:10:40.934896 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 30 14:10:40.934968 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 30 14:10:40.935036 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:10:40.935111 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 30 14:10:40.935179 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:10:40.935256 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 30 14:10:40.935432 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:10:40.935655 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 30 14:10:40.935733 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:10:40.935805 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 30 14:10:40.935873 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:10:40.935943 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 30 14:10:40.936021 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:10:40.936131 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 30 14:10:40.936208 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:10:40.936279 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 30 14:10:40.936369 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:10:40.936623 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 30 14:10:40.936767 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:10:40.936853 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 30 14:10:40.936938 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 30 14:10:40.937061 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 30 14:10:40.937134 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 14:10:40.937232 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 30 14:10:40.937352 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 14:10:40.937427 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 30 14:10:40.937517 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 14:10:40.937621 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 30 14:10:40.937689 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 14:10:40.937759 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 30 14:10:40.937826 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 14:10:40.937899 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 30 14:10:40.937966 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 14:10:40.938036 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 30 14:10:40.938102 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 14:10:40.938176 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 30 14:10:40.938243 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 14:10:40.938362 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 30 14:10:40.938444 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 30 14:10:40.938582 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 30 14:10:40.938683 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 30 14:10:40.938755 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 14:10:40.938831 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 30 14:10:40.938899 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 14:10:40.938966 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 14:10:40.939032 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 30 14:10:40.939097 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:10:40.939171 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 30 14:10:40.939242 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 14:10:40.939325 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 14:10:40.939396 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 30 14:10:40.939463 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:10:40.939571 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 14:10:40.939644 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 30 14:10:40.939720 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 14:10:40.939789 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 14:10:40.939857 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 30 14:10:40.939925 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:10:40.940001 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 14:10:40.940072 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 14:10:40.940142 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 14:10:40.940213 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 30 14:10:40.940307 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:10:40.940405 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 30 14:10:40.940481 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 30 14:10:40.940574 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 14:10:40.940648 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:10:40.940721 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 30 14:10:40.940791 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:10:40.940869 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 30 14:10:40.940950 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 30 14:10:40.941023 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 14:10:40.941098 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 14:10:40.941173 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 30 14:10:40.941243 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:10:40.941375 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 30 14:10:40.941466 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 30 14:10:40.941572 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 30 14:10:40.941650 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 14:10:40.941723 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 14:10:40.941794 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 30 14:10:40.941885 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:10:40.941968 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 14:10:40.942038 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 14:10:40.942109 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 30 14:10:40.942181 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:10:40.942287 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 14:10:40.942376 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 30 14:10:40.942445 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 30 14:10:40.944633 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:10:40.944739 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 14:10:40.944807 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 14:10:40.944867 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 14:10:40.944944 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 14:10:40.945014 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 30 14:10:40.945108 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:10:40.945197 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 30 14:10:40.945272 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 30 14:10:40.945359 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:10:40.945435 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 30 14:10:40.945522 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 30 14:10:40.945684 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:10:40.945804 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 14:10:40.945874 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 30 14:10:40.945937 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:10:40.946010 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 30 14:10:40.946076 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 30 14:10:40.946142 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:10:40.946217 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 30 14:10:40.946286 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 30 14:10:40.946415 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:10:40.947597 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 30 14:10:40.947743 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 30 14:10:40.947813 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:10:40.947891 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 30 14:10:40.947967 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 30 14:10:40.948041 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:10:40.948127 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 30 14:10:40.948208 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 30 14:10:40.948279 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:10:40.948291 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 14:10:40.948318 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 14:10:40.948329 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 14:10:40.948339 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 14:10:40.948347 kernel: iommu: Default domain type: Translated
Jan 30 14:10:40.948356 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:10:40.948369 kernel: efivars: Registered efivars operations
Jan 30 14:10:40.948378 kernel: vgaarb: loaded
Jan 30 14:10:40.948386 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:10:40.948394 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:10:40.948402 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:10:40.948410 kernel: pnp: PnP ACPI init
Jan 30 14:10:40.949619 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 14:10:40.949647 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 14:10:40.949663 kernel: NET: Registered PF_INET protocol family
Jan 30 14:10:40.949672 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:10:40.949680 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:10:40.949689 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:10:40.949697 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:10:40.949705 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:10:40.949713 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:10:40.949721 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:40.949729 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:40.949739 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:10:40.949848 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 30 14:10:40.949861 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:10:40.949869 kernel: kvm [1]: HYP mode not available
Jan 30 14:10:40.949878 kernel: Initialise system trusted keyrings
Jan 30 14:10:40.949886 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:10:40.949894 kernel: Key type asymmetric registered
Jan 30 14:10:40.949902 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:10:40.949910 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:10:40.949921 kernel: io scheduler mq-deadline registered
Jan 30 14:10:40.949928 kernel: io scheduler kyber registered
Jan 30 14:10:40.949937 kernel: io scheduler bfq registered
Jan 30 14:10:40.949946 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 14:10:40.950021 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 30 14:10:40.950091 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 30 14:10:40.950160 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.950233 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 30 14:10:40.950343 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 30 14:10:40.950417 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.951552 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 30 14:10:40.951679 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 30 14:10:40.951751 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.951826 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 30 14:10:40.951907 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 30 14:10:40.951995 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.952075 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 30 14:10:40.952146 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 30 14:10:40.952213 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.952285 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 30 14:10:40.952414 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 30 14:10:40.953086 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.953181 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 30 14:10:40.953249 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 30 14:10:40.953339 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.953428 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 30 14:10:40.953517 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 30 14:10:40.953587 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.953613 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 30 14:10:40.953694 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 30 14:10:40.953766 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 30 14:10:40.953834 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:40.953849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 14:10:40.953858 kernel: ACPI: button: Power Button [PWRB]
Jan 30 14:10:40.953866 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 14:10:40.953943 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 30 14:10:40.954019 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 30 14:10:40.954031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:10:40.954040 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 30 14:10:40.954108 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 30 14:10:40.954121 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 30 14:10:40.954130 kernel: thunder_xcv, ver 1.0
Jan 30 14:10:40.954138 kernel: thunder_bgx, ver 1.0
Jan 30 14:10:40.954146 kernel: nicpf, ver 1.0
Jan 30 14:10:40.954154 kernel: nicvf, ver 1.0
Jan 30 14:10:40.954243 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:10:40.954358 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:10:40 UTC (1738246240)
Jan 30 14:10:40.954371 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:10:40.954385 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 14:10:40.954393 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:10:40.954401 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:10:40.954409 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:10:40.954417 kernel: Segment Routing with IPv6
Jan 30 14:10:40.954425 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:10:40.954433 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:10:40.954441 kernel: Key type dns_resolver registered
Jan 30 14:10:40.954449 kernel: registered taskstats version 1
Jan 30 14:10:40.954459 kernel: Loading compiled-in X.509 certificates
Jan 30 14:10:40.954467 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:10:40.954475 kernel: Key type .fscrypt registered
Jan 30 14:10:40.954483 kernel: Key type fscrypt-provisioning registered
Jan 30 14:10:40.954540 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:10:40.954550 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:10:40.954559 kernel: ima: No architecture policies found
Jan 30 14:10:40.954567 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:10:40.954575 kernel: clk: Disabling unused clocks
Jan 30 14:10:40.954586 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:10:40.954594 kernel: Run /init as init process
Jan 30 14:10:40.954602 kernel: with arguments:
Jan 30 14:10:40.954610 kernel: /init
Jan 30 14:10:40.954618 kernel: with environment:
Jan 30 14:10:40.954625 kernel: HOME=/
Jan 30 14:10:40.954633 kernel: TERM=linux
Jan 30 14:10:40.954641 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:10:40.954652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:10:40.954665 systemd[1]: Detected virtualization kvm.
Jan 30 14:10:40.954673 systemd[1]: Detected architecture arm64.
Jan 30 14:10:40.954682 systemd[1]: Running in initrd.
Jan 30 14:10:40.954690 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:10:40.954698 systemd[1]: Hostname set to <localhost>.
Jan 30 14:10:40.954707 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:10:40.954719 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:10:40.954730 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:40.954739 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:40.954748 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:10:40.954756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:40.954765 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:10:40.954774 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:10:40.954784 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:10:40.954794 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:10:40.954803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:40.954811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:40.954820 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:40.954835 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:40.954847 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:40.954856 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:40.954866 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:40.954877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:40.954886 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:10:40.954895 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:10:40.954904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:40.954912 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:40.954921 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:40.954929 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:10:40.954938 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:10:40.954947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:40.954958 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:10:40.954966 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:10:40.954974 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:40.954982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:40.954991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:40.954999 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:10:40.955007 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:40.955018 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:10:40.955061 systemd-journald[235]: Collecting audit messages is disabled.
Jan 30 14:10:40.955085 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:10:40.955095 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:10:40.955104 kernel: Bridge firewalling registered
Jan 30 14:10:40.955113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:40.955121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:40.955130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:40.955140 systemd-journald[235]: Journal started
Jan 30 14:10:40.955162 systemd-journald[235]: Runtime Journal (/run/log/journal/c61da9f60cc54431a3f7573ee9bc123b) is 8.0M, max 76.6M, 68.6M free.
Jan 30 14:10:40.910143 systemd-modules-load[237]: Inserted module 'overlay'
Jan 30 14:10:40.932072 systemd-modules-load[237]: Inserted module 'br_netfilter'
Jan 30 14:10:40.960875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:10:40.960904 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:10:40.958676 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:10:40.967733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:10:40.973755 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:10:40.984005 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:40.985044 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:40.986427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:40.994970 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:10:41.013934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:41.019536 dracut-cmdline[271]: dracut-dracut-053
Jan 30 14:10:41.021972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:41.027963 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:41.052438 systemd-resolved[280]: Positive Trust Anchors:
Jan 30 14:10:41.053772 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:10:41.053813 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:10:41.058876 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jan 30 14:10:41.063928 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:41.064630 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:41.128562 kernel: SCSI subsystem initialized
Jan 30 14:10:41.133556 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:10:41.141572 kernel: iscsi: registered transport (tcp)
Jan 30 14:10:41.156634 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:10:41.156715 kernel: QLogic iSCSI HBA Driver
Jan 30 14:10:41.212086 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:41.217857 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:10:41.249525 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:10:41.249598 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:10:41.250521 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:10:41.304556 kernel: raid6: neonx8 gen() 15429 MB/s
Jan 30 14:10:41.321589 kernel: raid6: neonx4 gen() 15572 MB/s
Jan 30 14:10:41.338577 kernel: raid6: neonx2 gen() 13023 MB/s
Jan 30 14:10:41.355744 kernel: raid6: neonx1 gen() 10330 MB/s
Jan 30 14:10:41.372560 kernel: raid6: int64x8 gen() 6887 MB/s
Jan 30 14:10:41.389752 kernel: raid6: int64x4 gen() 7318 MB/s
Jan 30 14:10:41.406579 kernel: raid6: int64x2 gen() 6084 MB/s
Jan 30 14:10:41.423567 kernel: raid6: int64x1 gen() 4930 MB/s
Jan 30 14:10:41.423656 kernel: raid6: using algorithm neonx4 gen() 15572 MB/s
Jan 30 14:10:41.440635 kernel: raid6: .... xor() 11885 MB/s, rmw enabled
Jan 30 14:10:41.440705 kernel: raid6: using neon recovery algorithm
Jan 30 14:10:41.447722 kernel: xor: measuring software checksum speed
Jan 30 14:10:41.447803 kernel: 8regs : 19802 MB/sec
Jan 30 14:10:41.447820 kernel: 32regs : 17459 MB/sec
Jan 30 14:10:41.447846 kernel: arm64_neon : 26874 MB/sec
Jan 30 14:10:41.450830 kernel: xor: using function: arm64_neon (26874 MB/sec)
Jan 30 14:10:41.501603 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:10:41.520926 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:41.527788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:41.555151 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jan 30 14:10:41.559419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:41.569408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:10:41.588090 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Jan 30 14:10:41.629113 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:41.635834 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:41.692402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:41.699797 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:10:41.726556 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:41.729287 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:41.732631 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:41.733219 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:41.740815 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:10:41.766553 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:41.823326 kernel: ACPI: bus type USB registered
Jan 30 14:10:41.827746 kernel: scsi host0: Virtio SCSI HBA
Jan 30 14:10:41.827840 kernel: usbcore: registered new interface driver usbfs
Jan 30 14:10:41.827854 kernel: usbcore: registered new interface driver hub
Jan 30 14:10:41.828377 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:41.832975 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 14:10:41.833051 kernel: usbcore: registered new device driver usb
Jan 30 14:10:41.833076 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 30 14:10:41.830237 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:41.835658 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:41.836231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:41.836411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:41.837945 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:41.845849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:41.860394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:41.869841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:41.881830 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 30 14:10:41.893433 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 30 14:10:41.893922 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 14:10:41.893936 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 30 14:10:41.888974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:41.899931 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 14:10:41.909921 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 30 14:10:41.910041 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 14:10:41.910127 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 14:10:41.910207 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 30 14:10:41.910344 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 14:10:41.910463 kernel: hub 1-0:1.0: USB hub found
Jan 30 14:10:41.910598 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 14:10:41.910681 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 14:10:41.910776 kernel: hub 2-0:1.0: USB hub found
Jan 30 14:10:41.910869 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 14:10:41.910953 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 30 14:10:41.918132 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 30 14:10:41.918725 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 30 14:10:41.918842 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 30 14:10:41.918939 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 14:10:41.919037 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:10:41.919052 kernel: GPT:17805311 != 80003071
Jan 30 14:10:41.919063 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:10:41.919080 kernel: GPT:17805311 != 80003071
Jan 30 14:10:41.919091 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:10:41.919102 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:41.919115 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 30 14:10:41.961576 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (501) Jan 30 14:10:41.964570 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (514) Jan 30 14:10:41.970395 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 14:10:41.983120 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 14:10:41.990839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 14:10:41.995330 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 14:10:41.996034 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 14:10:42.005412 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:10:42.016011 disk-uuid[572]: Primary Header is updated. Jan 30 14:10:42.016011 disk-uuid[572]: Secondary Entries is updated. Jan 30 14:10:42.016011 disk-uuid[572]: Secondary Header is updated. Jan 30 14:10:42.022592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:42.026544 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:42.031546 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:42.147658 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 14:10:42.389603 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 30 14:10:42.525168 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 30 14:10:42.525218 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 14:10:42.527524 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 30 14:10:42.580804 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 30 14:10:42.582511 kernel: usbcore: registered new interface driver usbhid Jan 30 14:10:42.582550 kernel: usbhid: USB HID core driver Jan 30 14:10:43.035352 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:10:43.035570 disk-uuid[573]: The operation has completed successfully. Jan 30 14:10:43.089249 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:10:43.089385 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:10:43.104748 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:10:43.122541 sh[591]: Success Jan 30 14:10:43.136528 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 14:10:43.197342 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:10:43.207263 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:10:43.208192 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
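verity-setup prepares /dev/mapper/usr, whose contents are authenticated against the verity.usrhash root hash passed on the kernel command line. The sketch below is a hypothetical one-level illustration of the idea only; real dm-verity builds a full Merkle hash tree in the kernel and verifies blocks on demand as they are read:

    import hashlib

    BLOCK = 4096

    def verity_root(data: bytes) -> str:
        # Hash each 4 KiB block, then combine the leaf hashes in one step.
        # (Real dm-verity hashes leaves into interior nodes level by level.)
        leaves = [hashlib.sha256(data[i:i + BLOCK]).digest()
                  for i in range(0, len(data), BLOCK)]
        return hashlib.sha256(b"".join(leaves)).hexdigest()

    image = b"\x00" * (4 * BLOCK)        # stand-in for /dev/mapper/usr content
    tampered = b"\x01" + image[1:]
    print(verity_root(image) != verity_root(tampered))  # True: tampering detected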
Jan 30 14:10:43.237915 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 14:10:43.237986 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:43.239557 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:10:43.239616 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:10:43.240787 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:10:43.249566 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:10:43.251644 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:10:43.254356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:10:43.259775 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:10:43.264256 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:10:43.277034 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:43.277089 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:43.277100 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:43.286575 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:10:43.286645 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:43.299935 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:10:43.300754 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:43.307846 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:10:43.314960 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:10:43.413365 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:43.419957 ignition[687]: Ignition 2.19.0 Jan 30 14:10:43.419973 ignition[687]: Stage: fetch-offline Jan 30 14:10:43.420174 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:10:43.420028 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.420038 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:43.420206 ignition[687]: parsed url from cmdline: "" Jan 30 14:10:43.420209 ignition[687]: no config URL provided Jan 30 14:10:43.420214 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:43.420220 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:43.420225 ignition[687]: failed to fetch config: resource requires networking Jan 30 14:10:43.425427 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:43.420595 ignition[687]: Ignition finished successfully Jan 30 14:10:43.455220 systemd-networkd[777]: lo: Link UP Jan 30 14:10:43.456057 systemd-networkd[777]: lo: Gained carrier Jan 30 14:10:43.458651 systemd-networkd[777]: Enumeration completed Jan 30 14:10:43.459175 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:10:43.459718 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
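The fetch-offline stage above walks a fixed search order before concluding it needs the network: baked-in base configs, a config URL from the kernel command line, then /usr/lib/ignition/user.ign. Ignition itself is written in Go; this Python sketch only mirrors the order of the checks the log shows, simplified to return a single source rather than merging configs:

    import os
    from typing import Optional

    def find_offline_config(cmdline_url: Optional[str]) -> Optional[str]:
        # Baked-in base configs are checked first.
        for path in ("/usr/lib/ignition/base.d",
                     "/usr/lib/ignition/base.platform.d/hetzner"):
            if os.path.isdir(path):
                return path          # log: "no configs at ..." when absent
        if cmdline_url:
            return cmdline_url       # log: parsed url from cmdline: ""
        user_ign = "/usr/lib/ignition/user.ign"
        if os.path.isfile(user_ign):
            return user_ign          # log: reading system config file ...
        return None                  # fetch-offline fails; networking required

    print(find_offline_config(None))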
Jan 30 14:10:43.459721 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:43.462145 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:43.462148 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:43.462847 systemd-networkd[777]: eth0: Link UP Jan 30 14:10:43.462851 systemd-networkd[777]: eth0: Gained carrier Jan 30 14:10:43.462859 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:43.463304 systemd[1]: Reached target network.target - Network. Jan 30 14:10:43.466827 systemd-networkd[777]: eth1: Link UP Jan 30 14:10:43.466831 systemd-networkd[777]: eth1: Gained carrier Jan 30 14:10:43.466842 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:43.469736 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 14:10:43.486175 ignition[781]: Ignition 2.19.0 Jan 30 14:10:43.486192 ignition[781]: Stage: fetch Jan 30 14:10:43.486410 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.486421 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:43.486588 ignition[781]: parsed url from cmdline: "" Jan 30 14:10:43.486614 ignition[781]: no config URL provided Jan 30 14:10:43.486620 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:10:43.486629 ignition[781]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:10:43.486654 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 14:10:43.487242 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 14:10:43.501752 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:10:43.532663 systemd-networkd[777]: eth0: DHCPv4 address 168.119.241.96/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 14:10:43.688215 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 30 14:10:43.693945 ignition[781]: GET result: OK Jan 30 14:10:43.694135 ignition[781]: parsing config with SHA512: 47076d29adae2865eab506345478640fa93b9c5b6fd7efc11e7e5e831df7f58d2d1717ed1a2756974f5586022d1319c2df3c89d53596047dbf2ed9345d9558d3 Jan 30 14:10:43.702982 unknown[781]: fetched base config from "system" Jan 30 14:10:43.703401 ignition[781]: fetch: fetch complete Jan 30 14:10:43.702993 unknown[781]: fetched base config from "system" Jan 30 14:10:43.703406 ignition[781]: fetch: fetch passed Jan 30 14:10:43.702999 unknown[781]: fetched user config from "hetzner" Jan 30 14:10:43.703454 ignition[781]: Ignition finished successfully Jan 30 14:10:43.706484 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:10:43.710760 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
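Attempt #1 against the Hetzner metadata service fails while DHCP is still in flight, and attempt #2 succeeds once the addresses land; Ignition then logs a SHA512 fingerprint of the fetched payload. A rough sketch of that retry-then-fingerprint loop; the attempt count and delay are assumptions for illustration, not Ignition's actual backoff schedule:

    import hashlib
    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(attempts: int = 5, delay: float = 1.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except OSError as err:   # e.g. "connect: network is unreachable"
                print(f"GET error on attempt #{attempt}: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")

    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())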
Jan 30 14:10:43.735485 ignition[788]: Ignition 2.19.0 Jan 30 14:10:43.735520 ignition[788]: Stage: kargs Jan 30 14:10:43.735734 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.735745 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:43.739534 ignition[788]: kargs: kargs passed Jan 30 14:10:43.740217 ignition[788]: Ignition finished successfully Jan 30 14:10:43.742475 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:10:43.749825 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:10:43.767780 ignition[795]: Ignition 2.19.0 Jan 30 14:10:43.767788 ignition[795]: Stage: disks Jan 30 14:10:43.771137 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:10:43.767980 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:43.767990 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:43.768986 ignition[795]: disks: disks passed Jan 30 14:10:43.769042 ignition[795]: Ignition finished successfully Jan 30 14:10:43.776000 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:43.778201 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:10:43.780232 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:10:43.781970 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:10:43.783055 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:43.790770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:10:43.811043 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 14:10:43.815824 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:10:43.822647 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:10:43.871590 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 14:10:43.873708 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:10:43.874818 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:10:43.883728 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:43.886683 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:10:43.892825 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:10:43.895405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:10:43.896823 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:43.906537 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (811) Jan 30 14:10:43.909568 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:43.909629 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:43.909641 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:43.910197 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:10:43.917799 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
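The fsck summary above reports used/total counts for inodes and blocks; translated into utilization, the freshly provisioned ROOT filesystem is nearly empty:

    # "ROOT: clean, 14/1628000 files, 120691/1617920 blocks" from the log above.
    inodes_used, inodes_total = 14, 1628000
    blocks_used, blocks_total = 120691, 1617920
    print(f"inodes: {100 * inodes_used / inodes_total:.4f}% used")  # ~0.0009%
    print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # ~7.5%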
Jan 30 14:10:43.924961 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:10:43.925028 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:43.932161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:10:43.984219 coreos-metadata[813]: Jan 30 14:10:43.983 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 14:10:43.986946 coreos-metadata[813]: Jan 30 14:10:43.986 INFO Fetch successful Jan 30 14:10:43.989682 coreos-metadata[813]: Jan 30 14:10:43.988 INFO wrote hostname ci-4081-3-0-d-83a473bcbf to /sysroot/etc/hostname Jan 30 14:10:43.991199 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:10:43.992946 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:43.999210 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:10:44.004097 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:10:44.008832 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:10:44.121419 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:44.127701 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:10:44.130473 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:10:44.140580 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:44.171072 ignition[928]: INFO : Ignition 2.19.0 Jan 30 14:10:44.171072 ignition[928]: INFO : Stage: mount Jan 30 14:10:44.172658 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:44.172658 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:44.173195 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:10:44.175478 ignition[928]: INFO : mount: mount passed Jan 30 14:10:44.175478 ignition[928]: INFO : Ignition finished successfully Jan 30 14:10:44.176105 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:10:44.186694 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:10:44.239359 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:10:44.246804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:44.258540 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Jan 30 14:10:44.260912 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:44.260983 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:44.260994 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:44.264550 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:10:44.264615 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:44.269223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:10:44.295261 ignition[956]: INFO : Ignition 2.19.0 Jan 30 14:10:44.295261 ignition[956]: INFO : Stage: files Jan 30 14:10:44.296448 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:44.296448 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:44.298821 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:10:44.298821 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:10:44.300425 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:10:44.303125 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:10:44.304585 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:10:44.306043 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:10:44.304961 unknown[956]: wrote ssh authorized keys file for user: core Jan 30 14:10:44.307924 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:10:44.307924 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:10:44.590541 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:10:45.237690 systemd-networkd[777]: eth0: Gained IPv6LL Jan 30 14:10:45.238207 systemd-networkd[777]: eth1: Gained IPv6LL Jan 30 14:10:46.724164 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:10:46.724164 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:46.727860 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 14:10:47.295600 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:10:47.607104 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:47.607104 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:10:47.611110 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:47.614558 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:47.614558 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:47.614558 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:47.614558 ignition[956]: INFO : files: files passed Jan 30 14:10:47.614558 ignition[956]: INFO : Ignition finished successfully Jan 30 14:10:47.616630 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:10:47.623755 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:10:47.630728 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:10:47.645563 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:10:47.645715 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 14:10:47.653742 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.653742 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.658800 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:47.659722 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:47.660940 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:10:47.670809 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:10:47.709811 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:10:47.709949 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:10:47.711621 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:10:47.712741 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:10:47.714347 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:10:47.724313 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:10:47.742957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:47.747854 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:10:47.772299 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:47.775484 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:10:47.776166 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:10:47.776732 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:10:47.776867 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:47.779857 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:10:47.780465 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:10:47.781528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:10:47.782639 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:47.783810 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:47.784956 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:10:47.785916 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:47.788438 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:10:47.789633 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:10:47.790488 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:10:47.791472 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:10:47.791624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:47.792957 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:47.793605 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:47.794538 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:10:47.794612 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 14:10:47.795576 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:10:47.795705 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:10:47.797281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:10:47.797412 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:47.798616 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:10:47.798718 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:10:47.799593 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:10:47.799690 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:47.806938 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:10:47.807473 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:10:47.807630 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:47.812871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:10:47.813367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:10:47.813609 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:47.817608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:10:47.818600 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:10:47.827161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:10:47.827339 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:10:47.832516 ignition[1010]: INFO : Ignition 2.19.0 Jan 30 14:10:47.832516 ignition[1010]: INFO : Stage: umount Jan 30 14:10:47.832516 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:47.832516 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:47.844148 ignition[1010]: INFO : umount: umount passed Jan 30 14:10:47.844148 ignition[1010]: INFO : Ignition finished successfully Jan 30 14:10:47.834858 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:10:47.835044 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:10:47.839241 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:10:47.839367 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:10:47.842746 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:10:47.842818 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:10:47.843364 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:10:47.843401 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:10:47.844652 systemd[1]: Stopped target network.target - Network. Jan 30 14:10:47.845524 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:10:47.845588 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:47.847698 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:10:47.848440 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:10:47.849574 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:47.850685 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 30 14:10:47.851687 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:10:47.853900 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:10:47.854008 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:47.855183 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:10:47.855226 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:47.856449 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:10:47.857339 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:10:47.859578 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:10:47.859686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:47.861844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:10:47.862946 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:10:47.866473 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:10:47.872384 systemd-networkd[777]: eth0: DHCPv6 lease lost Jan 30 14:10:47.874037 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:10:47.874435 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:10:47.875926 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:10:47.876062 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:10:47.878614 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:10:47.878692 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:47.879926 systemd-networkd[777]: eth1: DHCPv6 lease lost Jan 30 14:10:47.880753 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:10:47.880831 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:47.883960 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:10:47.884151 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:10:47.885654 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:10:47.885700 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:47.893689 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:10:47.894437 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:10:47.894542 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:47.895468 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:10:47.897131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:47.897911 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:10:47.897982 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:47.899200 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:47.915322 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:10:47.915561 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:10:47.917942 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:10:47.918101 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:47.919786 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 30 14:10:47.919886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:47.921029 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:10:47.921064 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:47.922105 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:10:47.922158 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:47.923629 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:10:47.923676 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:10:47.925113 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:47.925159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:47.931726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:10:47.933683 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:10:47.934478 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:47.936358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:47.936438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:47.939513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:10:47.939650 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:10:47.940647 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:10:47.948762 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:10:47.957146 systemd[1]: Switching root. Jan 30 14:10:47.993608 systemd-journald[235]: Journal stopped Jan 30 14:10:48.969877 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Jan 30 14:10:48.969983 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:10:48.969997 kernel: SELinux: policy capability open_perms=1 Jan 30 14:10:48.970007 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:10:48.970018 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:10:48.970032 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:10:48.970046 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:10:48.970055 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:10:48.970064 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:10:48.970075 systemd[1]: Successfully loaded SELinux policy in 37.945ms. Jan 30 14:10:48.970101 kernel: audit: type=1403 audit(1738246248.134:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:10:48.970112 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.228ms. Jan 30 14:10:48.970124 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:10:48.970136 systemd[1]: Detected virtualization kvm. Jan 30 14:10:48.970155 systemd[1]: Detected architecture arm64. Jan 30 14:10:48.970166 systemd[1]: Detected first boot. Jan 30 14:10:48.970176 systemd[1]: Hostname set to <ci-4081-3-0-d-83a473bcbf>. Jan 30 14:10:48.970187 systemd[1]: Initializing machine ID from VM UUID.
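"Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-provided UUID on first boot instead of generating a random one. A rough sketch of one common source and the normalization involved; the real logic lives in systemd's sd-id128 code, and the sysfs path is an assumption for this KVM guest (reading it may require root):

    import pathlib
    import re

    raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = re.sub(r"[^0-9a-f]", "", raw.lower())  # drop dashes, lowercase
    assert len(machine_id) == 32                        # 128 bits as hex
    print(machine_id)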
Jan 30 14:10:48.970198 zram_generator::config[1052]: No configuration found. Jan 30 14:10:48.970209 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:10:48.970220 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 14:10:48.970230 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 14:10:48.970265 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 14:10:48.970279 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:10:48.970290 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:10:48.970300 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:10:48.970311 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:10:48.970321 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:10:48.970332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:10:48.970343 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:10:48.970353 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:10:48.970367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:48.970378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:48.970388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:10:48.970399 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:10:48.970410 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:10:48.970420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:10:48.970431 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 14:10:48.970441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:48.970452 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 14:10:48.970465 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 14:10:48.970476 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 14:10:48.970487 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:10:48.970589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:10:48.970603 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:10:48.970614 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:10:48.970628 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:10:48.970643 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:10:48.970653 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:10:48.970663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:48.970674 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:48.970684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:48.970696 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 30 14:10:48.970706 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:10:48.970716 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:10:48.970728 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:10:48.970739 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:10:48.970749 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:10:48.970759 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:10:48.970770 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:10:48.970780 systemd[1]: Reached target machines.target - Containers. Jan 30 14:10:48.970792 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:10:48.970808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:10:48.970821 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:10:48.970834 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:10:48.970845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:10:48.970855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:10:48.970866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:10:48.970878 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:10:48.970890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:10:48.970901 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:10:48.970912 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:10:48.970922 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:10:48.970933 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:10:48.970943 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:10:48.970955 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:10:48.970965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:10:48.970980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:10:48.970994 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:10:48.971004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:10:48.971015 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:10:48.971025 systemd[1]: Stopped verity-setup.service. Jan 30 14:10:48.971036 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:10:48.971046 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:10:48.971058 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:10:48.971069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:10:48.971080 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:10:48.971090 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 30 14:10:48.971101 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:48.971113 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:10:48.971124 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:10:48.971138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:10:48.971149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:10:48.971159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:10:48.971170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:10:48.971180 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:10:48.971191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:48.971206 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:10:48.971217 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:10:48.971228 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:10:48.971250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:10:48.971263 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:10:48.971274 kernel: fuse: init (API version 7.39) Jan 30 14:10:48.971285 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:10:48.971296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:10:48.971308 kernel: loop: module loaded Jan 30 14:10:48.971318 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:10:48.971329 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:10:48.971381 systemd-journald[1129]: Collecting audit messages is disabled. Jan 30 14:10:48.971412 systemd-journald[1129]: Journal started Jan 30 14:10:48.971438 systemd-journald[1129]: Runtime Journal (/run/log/journal/c61da9f60cc54431a3f7573ee9bc123b) is 8.0M, max 76.6M, 68.6M free. Jan 30 14:10:48.980686 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:10:48.665271 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:10:48.690302 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 14:10:48.690860 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:10:48.985008 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:10:48.985081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:49.003520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:10:49.003627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:10:49.003643 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:10:49.009796 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:10:49.021260 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 30 14:10:49.021338 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:10:49.018450 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:10:49.020561 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:10:49.021932 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:10:49.022060 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:10:49.025057 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:10:49.041559 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:10:49.071344 kernel: ACPI: bus type drm_connector registered Jan 30 14:10:49.066888 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:10:49.072782 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:10:49.081713 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:10:49.090538 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:10:49.091782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:10:49.094946 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:10:49.096917 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:10:49.100542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:49.106552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:49.107736 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:10:49.124877 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:10:49.127426 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:10:49.129461 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:10:49.131961 kernel: loop0: detected capacity change from 0 to 8 Jan 30 14:10:49.137916 systemd-journald[1129]: Time spent on flushing to /var/log/journal/c61da9f60cc54431a3f7573ee9bc123b is 47.009ms for 1136 entries. Jan 30 14:10:49.137916 systemd-journald[1129]: System Journal (/var/log/journal/c61da9f60cc54431a3f7573ee9bc123b) is 8.0M, max 584.8M, 576.8M free. Jan 30 14:10:49.208682 systemd-journald[1129]: Received client request to flush runtime journal. Jan 30 14:10:49.208752 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:10:49.208777 kernel: loop1: detected capacity change from 0 to 114328 Jan 30 14:10:49.160750 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 14:10:49.210552 kernel: loop2: detected capacity change from 0 to 114432 Jan 30 14:10:49.173657 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:10:49.181032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:10:49.211739 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:10:49.243201 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 30 14:10:49.243221 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. 
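journald reports 47.009ms spent flushing 1136 entries from the runtime journal to persistent storage; per entry that works out to roughly:

    # Figures from the systemd-journald flush report above.
    ms_total, entries = 47.009, 1136
    print(f"{ms_total / entries * 1000:.1f} us per entry")  # ~41.4 us per entry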
Jan 30 14:10:49.253012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:49.259223 kernel: loop3: detected capacity change from 0 to 194096 Jan 30 14:10:49.309540 kernel: loop4: detected capacity change from 0 to 8 Jan 30 14:10:49.316560 kernel: loop5: detected capacity change from 0 to 114328 Jan 30 14:10:49.336538 kernel: loop6: detected capacity change from 0 to 114432 Jan 30 14:10:49.367055 kernel: loop7: detected capacity change from 0 to 194096 Jan 30 14:10:49.392071 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 14:10:49.393074 (sd-merge)[1193]: Merged extensions into '/usr'. Jan 30 14:10:49.400540 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:10:49.400802 systemd[1]: Reloading... Jan 30 14:10:49.496522 zram_generator::config[1216]: No configuration found. Jan 30 14:10:49.614545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:10:49.661456 systemd[1]: Reloading finished in 259 ms. Jan 30 14:10:49.682567 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:10:49.688930 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:10:49.690158 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:10:49.697826 systemd[1]: Starting ensure-sysext.service... Jan 30 14:10:49.701729 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:10:49.720643 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:10:49.720667 systemd[1]: Reloading... Jan 30 14:10:49.752070 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:10:49.752365 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:10:49.753038 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:10:49.753263 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 30 14:10:49.753314 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 30 14:10:49.758120 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:10:49.758136 systemd-tmpfiles[1257]: Skipping /boot Jan 30 14:10:49.780703 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:10:49.780859 systemd-tmpfiles[1257]: Skipping /boot Jan 30 14:10:49.816551 zram_generator::config[1283]: No configuration found. Jan 30 14:10:49.939425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:10:49.986186 systemd[1]: Reloading finished in 264 ms. Jan 30 14:10:50.003114 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:10:50.004297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
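The (sd-merge) lines show systemd-sysext stacking the four extension images over /usr, which it does with a read-only overlayfs mount. A sketch of how such a lowerdir option could be assembled from the extension names in the log; the /run staging paths are illustrative, not systemd's exact layout:

    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]
    hierarchy = "/usr"
    # One lowerdir per mounted extension image; overlayfs gives the leftmost
    # lowerdir highest precedence, so the base /usr goes last.
    lowerdirs = [f"/run/sysext/{name}{hierarchy}" for name in extensions]
    opts = "lowerdir=" + ":".join(lowerdirs + [hierarchy])
    print(f"mount -t overlay overlay -o {opts} {hierarchy}")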
Jan 30 14:10:50.021738 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:10:50.028711 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:10:50.033761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:10:50.037870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:10:50.047729 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:50.060810 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:10:50.080085 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:10:50.084297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:10:50.092785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:10:50.097813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:10:50.101895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:10:50.102940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:50.104711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:10:50.104875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:50.108041 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Jan 30 14:10:50.112368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:10:50.117417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:10:50.118573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:50.120570 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:10:50.128609 systemd[1]: Finished ensure-sysext.service. Jan 30 14:10:50.151754 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:10:50.152793 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:10:50.153776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:10:50.154199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:10:50.156425 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:10:50.156902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:10:50.163847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:50.167201 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:10:50.188997 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:10:50.190685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:10:50.196764 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 30 14:10:50.198161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:10:50.198598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:10:50.199563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:10:50.206622 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:10:50.207114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:10:50.212920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:10:50.233918 augenrules[1381]: No rules Jan 30 14:10:50.235784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:10:50.244216 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 14:10:50.246641 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:10:50.267477 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:10:50.390359 systemd-networkd[1372]: lo: Link UP Jan 30 14:10:50.390370 systemd-networkd[1372]: lo: Gained carrier Jan 30 14:10:50.392558 systemd-networkd[1372]: Enumeration completed Jan 30 14:10:50.393262 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:10:50.394175 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:50.394508 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:50.395847 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:50.395953 systemd-networkd[1372]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:10:50.396589 systemd-networkd[1372]: eth0: Link UP Jan 30 14:10:50.396596 systemd-networkd[1372]: eth0: Gained carrier Jan 30 14:10:50.396614 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:50.403903 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:10:50.404653 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:10:50.406052 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:10:50.406091 systemd-resolved[1326]: Positive Trust Anchors: Jan 30 14:10:50.406103 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:10:50.406134 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:10:50.407575 systemd-networkd[1372]: eth1: Link UP Jan 30 14:10:50.407582 systemd-networkd[1372]: eth1: Gained carrier Jan 30 14:10:50.407603 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:50.418559 systemd-resolved[1326]: Using system hostname 'ci-4081-3-0-d-83a473bcbf'. Jan 30 14:10:50.422906 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:10:50.423508 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:10:50.423982 systemd[1]: Reached target network.target - Network. Jan 30 14:10:50.424947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:50.441601 systemd-networkd[1372]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:10:50.443194 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Jan 30 14:10:50.458549 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1358) Jan 30 14:10:50.465708 systemd-networkd[1372]: eth0: DHCPv4 address 168.119.241.96/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 14:10:50.466914 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Jan 30 14:10:50.467208 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Jan 30 14:10:50.499571 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:10:50.537566 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 14:10:50.537696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:10:50.563921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:10:50.567438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:10:50.573335 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:10:50.574198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:10:50.574262 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:10:50.574648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:10:50.574810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
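The systemd-networkd records above show eth0 and eth1 matching zz-default.network, gaining carrier, and acquiring DHCPv4 addresses. For readers who want the same view from userspace, here is a minimal Go sketch using only the standard library; the interface names eth0/eth1 come from the log, and the program itself is illustrative, not part of Flatcar.

// ifdump.go - list network interfaces and their addresses, roughly the
// view systemd-networkd logged above (eth0/eth1 with DHCPv4 addresses).
package main

import (
	"fmt"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Flags report link state, mirroring networkd's "Gained carrier".
		fmt.Printf("%s (%s)\n", ifc.Name, ifc.Flags)
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			fmt.Printf("  %s\n", a.String())
		}
	}
}

Run on the host, this would print the 10.0.0.3/32 and 168.119.241.96/32 leases the log records, plus loopback.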
Jan 30 14:10:50.579197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 14:10:50.588596 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:10:50.591567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:10:50.591734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:10:50.594338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:10:50.598269 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:10:50.598511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:10:50.600111 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:10:50.614523 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 30 14:10:50.614628 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 14:10:50.614648 kernel: [drm] features: -context_init Jan 30 14:10:50.617688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:50.618733 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:10:50.624609 kernel: [drm] number of scanouts: 1 Jan 30 14:10:50.624737 kernel: [drm] number of cap sets: 0 Jan 30 14:10:50.628594 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 14:10:50.634609 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 14:10:50.647785 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 14:10:50.648078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:50.648311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:50.654853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:10:50.729534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:50.770424 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:10:50.777877 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:10:50.791177 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:10:50.818329 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:10:50.820439 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:50.821267 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:10:50.821997 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:10:50.822825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:10:50.823809 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:10:50.824484 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:10:50.825128 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 30 14:10:50.825824 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:10:50.825864 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:10:50.826524 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:10:50.830423 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:10:50.833098 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:10:50.839103 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:10:50.843202 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:10:50.845597 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:10:50.846570 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:10:50.847075 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:50.847809 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:10:50.847843 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:10:50.853809 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:10:50.860387 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:10:50.861290 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:10:50.868784 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:10:50.874720 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:10:50.878796 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:10:50.880152 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:10:50.884802 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:10:50.890726 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:10:50.892895 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 14:10:50.898416 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:10:50.914745 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:10:50.919709 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:10:50.923849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:10:50.924460 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:10:50.927787 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:10:50.933486 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:10:50.936123 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 30 14:10:50.949420 jq[1445]: false Jan 30 14:10:50.951607 extend-filesystems[1446]: Found loop4 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found loop5 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found loop6 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found loop7 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda1 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda2 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda3 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found usr Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda4 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda6 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda7 Jan 30 14:10:50.956018 extend-filesystems[1446]: Found sda9 Jan 30 14:10:50.956018 extend-filesystems[1446]: Checking size of /dev/sda9 Jan 30 14:10:50.958667 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:10:50.957802 dbus-daemon[1444]: [system] SELinux support is enabled Jan 30 14:10:50.963955 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:10:50.964168 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:10:50.964538 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:10:50.964568 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:10:50.965856 coreos-metadata[1443]: Jan 30 14:10:50.965 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 14:10:50.967330 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:10:50.967399 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:10:50.969654 coreos-metadata[1443]: Jan 30 14:10:50.969 INFO Fetch successful Jan 30 14:10:50.970064 coreos-metadata[1443]: Jan 30 14:10:50.969 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 14:10:50.973700 coreos-metadata[1443]: Jan 30 14:10:50.973 INFO Fetch successful Jan 30 14:10:50.977532 jq[1457]: true Jan 30 14:10:51.001257 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:10:51.002823 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:10:51.025794 extend-filesystems[1446]: Resized partition /dev/sda9 Jan 30 14:10:51.037240 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:10:51.038121 jq[1470]: true Jan 30 14:10:51.048539 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 14:10:51.054016 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:10:51.069937 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:10:51.070159 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
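The EXT4 records above show extend-filesystems growing the root filesystem on /dev/sda9 from 1617920 to 9393147 4k blocks. Once the online resize completes (next records), the new geometry is visible via statfs(2); a minimal Linux-only Go sketch, with the mount point "/" taken from the log and everything else illustrative:

// fssize.go - report the block geometry of the filesystem backing "/",
// the figures extend-filesystems/resize2fs are changing above.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	// Bsize is the filesystem block size (4096 per the EXT4 log line);
	// Blocks is the total block count that resize2fs grows in place.
	fmt.Printf("block size: %d, total blocks: %d, total bytes: %d\n",
		st.Bsize, st.Blocks, uint64(st.Bsize)*st.Blocks)
}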
Jan 30 14:10:51.091252 update_engine[1456]: I20250130 14:10:51.090482 1456 main.cc:92] Flatcar Update Engine starting Jan 30 14:10:51.096119 tar[1467]: linux-arm64/helm Jan 30 14:10:51.103772 update_engine[1456]: I20250130 14:10:51.103129 1456 update_check_scheduler.cc:74] Next update check in 2m6s Jan 30 14:10:51.103632 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:10:51.108788 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:10:51.201085 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1367) Jan 30 14:10:51.209195 systemd-logind[1454]: New seat seat0. Jan 30 14:10:51.210375 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 14:10:51.210392 systemd-logind[1454]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 30 14:10:51.214073 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:10:51.230772 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:10:51.234396 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:10:51.278178 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:10:51.279903 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:10:51.311995 systemd[1]: Starting sshkeys.service... Jan 30 14:10:51.314529 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 14:10:51.334419 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:10:51.335424 extend-filesystems[1486]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 14:10:51.335424 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 14:10:51.335424 extend-filesystems[1486]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 14:10:51.347346 extend-filesystems[1446]: Resized filesystem in /dev/sda9 Jan 30 14:10:51.347346 extend-filesystems[1446]: Found sr0 Jan 30 14:10:51.348345 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:10:51.350014 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:10:51.350305 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:10:51.357973 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:10:51.410011 coreos-metadata[1528]: Jan 30 14:10:51.409 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 14:10:51.413338 coreos-metadata[1528]: Jan 30 14:10:51.412 INFO Fetch successful Jan 30 14:10:51.416346 unknown[1528]: wrote ssh authorized keys file for user: core Jan 30 14:10:51.438985 containerd[1481]: time="2025-01-30T14:10:51.438859400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:10:51.456264 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:10:51.457757 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:10:51.461962 systemd[1]: Finished sshkeys.service. Jan 30 14:10:51.507332 containerd[1481]: time="2025-01-30T14:10:51.507264680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:10:51.512856 containerd[1481]: time="2025-01-30T14:10:51.512066600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:51.512856 containerd[1481]: time="2025-01-30T14:10:51.512117960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:10:51.512856 containerd[1481]: time="2025-01-30T14:10:51.512137760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:10:51.512856 containerd[1481]: time="2025-01-30T14:10:51.512335880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:10:51.512856 containerd[1481]: time="2025-01-30T14:10:51.512354840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:51.512856 containerd[1481]: time="2025-01-30T14:10:51.512425040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:51.512856 containerd[1481]: time="2025-01-30T14:10:51.512437880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.513899280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.513939800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.513956560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.513966720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.514093480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.514326320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.514440320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:10:51.515522 containerd[1481]: time="2025-01-30T14:10:51.514456520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 14:10:51.516715 containerd[1481]: time="2025-01-30T14:10:51.516672320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:10:51.517103 containerd[1481]: time="2025-01-30T14:10:51.517084480Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:10:51.526291 containerd[1481]: time="2025-01-30T14:10:51.526208720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:10:51.526659 containerd[1481]: time="2025-01-30T14:10:51.526639480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:10:51.527556 containerd[1481]: time="2025-01-30T14:10:51.527531400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:10:51.528025 containerd[1481]: time="2025-01-30T14:10:51.527630760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:10:51.528025 containerd[1481]: time="2025-01-30T14:10:51.527651400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:10:51.528025 containerd[1481]: time="2025-01-30T14:10:51.527833920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:10:51.528308 containerd[1481]: time="2025-01-30T14:10:51.528250160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:10:51.528908 containerd[1481]: time="2025-01-30T14:10:51.528886760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:10:51.529340 containerd[1481]: time="2025-01-30T14:10:51.529319200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:10:51.529411 containerd[1481]: time="2025-01-30T14:10:51.529398640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:10:51.529477 containerd[1481]: time="2025-01-30T14:10:51.529463920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529852240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529877440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529894400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529910760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529924760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529938760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529961680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.529989120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.530004840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.530018160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.530034320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.530053720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.530067400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530508 containerd[1481]: time="2025-01-30T14:10:51.530081080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530096480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530110160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530127200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530139760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530152080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530169160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530192480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530262040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530277360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.530796 containerd[1481]: time="2025-01-30T14:10:51.530288880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533019840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533082960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533097800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533112200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533122800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533138120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533149040Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:10:51.535390 containerd[1481]: time="2025-01-30T14:10:51.533160800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 14:10:51.535673 containerd[1481]: time="2025-01-30T14:10:51.533587120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:10:51.535673 containerd[1481]: time="2025-01-30T14:10:51.533653040Z" level=info msg="Connect containerd service" Jan 30 14:10:51.535673 containerd[1481]: time="2025-01-30T14:10:51.533697000Z" level=info msg="using legacy CRI server" Jan 30 14:10:51.535673 containerd[1481]: time="2025-01-30T14:10:51.533706520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:10:51.535673 containerd[1481]: time="2025-01-30T14:10:51.533826160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:10:51.538624 containerd[1481]: time="2025-01-30T14:10:51.538578680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:10:51.539653 containerd[1481]: time="2025-01-30T14:10:51.539595760Z" level=info msg="Start subscribing containerd event" Jan 30 14:10:51.539729 containerd[1481]: time="2025-01-30T14:10:51.539673720Z" level=info msg="Start recovering state" Jan 30 14:10:51.539777 containerd[1481]: time="2025-01-30T14:10:51.539761960Z" level=info msg="Start event monitor" Jan 30 14:10:51.539800 containerd[1481]: time="2025-01-30T14:10:51.539777280Z" level=info msg="Start snapshots syncer" Jan 30 14:10:51.539800 containerd[1481]: time="2025-01-30T14:10:51.539788720Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:10:51.539800 containerd[1481]: time="2025-01-30T14:10:51.539797400Z" level=info msg="Start streaming server" Jan 30 14:10:51.542416 containerd[1481]: time="2025-01-30T14:10:51.542371240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:10:51.544689 containerd[1481]: time="2025-01-30T14:10:51.543196120Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:10:51.543477 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:10:51.546311 containerd[1481]: time="2025-01-30T14:10:51.546264840Z" level=info msg="containerd successfully booted in 0.112389s" Jan 30 14:10:51.727975 tar[1467]: linux-arm64/LICENSE Jan 30 14:10:51.728230 tar[1467]: linux-arm64/README.md Jan 30 14:10:51.739953 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:10:51.978703 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:10:52.004285 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:10:52.011882 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:10:52.020948 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:10:52.021347 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:10:52.033100 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:10:52.047770 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:10:52.053900 systemd[1]: Started getty@tty1.service - Getty on tty1. 
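containerd reports serving on /run/containerd/containerd.sock and booting successfully. A hedged sketch of talking to that socket with the Go client module github.com/containerd/containerd (an assumption; the module is not shown in the log), printing the daemon version the way ctr version would:

// ctrversion.go - connect to the containerd socket logged above and
// print the daemon version. Assumes the github.com/containerd/containerd
// Go client module; run with access to the socket (typically root).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Namespace every call; "default" is containerd's conventional namespace.
	ctx := namespaces.WithNamespace(context.Background(), "default")
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}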
Jan 30 14:10:52.057911 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 14:10:52.059840 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:10:52.085705 systemd-networkd[1372]: eth1: Gained IPv6LL Jan 30 14:10:52.087032 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Jan 30 14:10:52.090592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:10:52.092010 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:10:52.098812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:52.101858 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:10:52.134105 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:10:52.406093 systemd-networkd[1372]: eth0: Gained IPv6LL Jan 30 14:10:52.406904 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Jan 30 14:10:53.031838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:53.032012 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:10:53.034563 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:10:53.038617 systemd[1]: Startup finished in 826ms (kernel) + 7.440s (initrd) + 4.942s (userspace) = 13.209s. Jan 30 14:10:53.791016 kubelet[1576]: E0130 14:10:53.790893 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:10:53.794385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:10:53.794588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:04.045316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:11:04.052959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:04.182789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:04.182990 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:04.241621 kubelet[1596]: E0130 14:11:04.241544 1596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:04.246281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:04.246621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:14.497397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:11:14.503977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:14.615106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:11:14.628253 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:14.678039 kubelet[1611]: E0130 14:11:14.677959 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:14.682191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:14.682460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:22.151249 systemd-timesyncd[1347]: Contacted time server 195.201.107.151:123 (2.flatcar.pool.ntp.org). Jan 30 14:11:22.151323 systemd-timesyncd[1347]: Initial clock synchronization to Thu 2025-01-30 14:11:22.151056 UTC. Jan 30 14:11:22.152032 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 30 14:11:24.328192 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:11:24.338409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:24.480408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:24.495367 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:24.554078 kubelet[1627]: E0130 14:11:24.554012 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:24.558005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:24.559042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:34.578704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 14:11:34.589250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:34.717114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:34.729531 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:34.782372 kubelet[1644]: E0130 14:11:34.782275 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:34.785181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:34.785376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:36.400975 update_engine[1456]: I20250130 14:11:36.400671 1456 update_attempter.cc:509] Updating boot flags... 
Jan 30 14:11:36.452950 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1661) Jan 30 14:11:36.502004 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1664) Jan 30 14:11:44.828841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 14:11:44.845591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:44.976144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:44.976915 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:45.025489 kubelet[1678]: E0130 14:11:45.025370 1678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:45.029744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:45.030190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:11:55.078218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 14:11:55.091324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:11:55.222548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:11:55.234459 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:11:55.289533 kubelet[1694]: E0130 14:11:55.289466 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:11:55.292528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:11:55.292759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:05.328607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 14:12:05.335273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:05.456149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:05.474427 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:05.529071 kubelet[1710]: E0130 14:12:05.528939 1710 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:05.532008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:05.532169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:15.578662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
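The repeating kubelet failure above is systemd crash-looping the unit because /var/lib/kubelet/config.yaml does not exist yet; on a node like this it is normally written later by kubeadm or the provisioning flow. As a sketch only, the file the error names is a KubeletConfiguration whose required header fields are kind and apiVersion; the stub below writes just that header and would stop the file-not-found error, but it is not claimed to produce a working node.

// kubeletcfg.go - write a minimal KubeletConfiguration stub to the path
// the failing unit above complains about. kind/apiVersion are the real
// required header fields; all operational settings that a provisioner
// (e.g. kubeadm) would normally fill in are deliberately omitted.
package main

import (
	"log"
	"os"
)

const minimalConfig = `kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(minimalConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}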
Jan 30 14:12:15.590282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:15.746295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:15.746545 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:15.798908 kubelet[1726]: E0130 14:12:15.798826 1726 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:15.801128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:15.801302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:25.828667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 14:12:25.836456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:25.985107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:25.997666 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:26.044615 kubelet[1742]: E0130 14:12:26.044450 1742 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:26.047503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:26.047672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:36.078620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 30 14:12:36.086192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:36.224049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:36.236566 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:36.287013 kubelet[1758]: E0130 14:12:36.286942 1758 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:36.289177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:36.289368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:41.835086 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:12:41.842466 systemd[1]: Started sshd@0-168.119.241.96:22-139.178.68.195:34216.service - OpenSSH per-connection server daemon (139.178.68.195:34216). 
Jan 30 14:12:42.821444 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 34216 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:42.825401 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:42.835816 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:12:42.847918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:12:42.852461 systemd-logind[1454]: New session 1 of user core. Jan 30 14:12:42.861483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:12:42.874364 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:12:42.878122 (systemd)[1771]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:12:42.985332 systemd[1771]: Queued start job for default target default.target. Jan 30 14:12:42.997831 systemd[1771]: Created slice app.slice - User Application Slice. Jan 30 14:12:42.997924 systemd[1771]: Reached target paths.target - Paths. Jan 30 14:12:42.997955 systemd[1771]: Reached target timers.target - Timers. Jan 30 14:12:43.004224 systemd[1771]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:12:43.021784 systemd[1771]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:12:43.021857 systemd[1771]: Reached target sockets.target - Sockets. Jan 30 14:12:43.021869 systemd[1771]: Reached target basic.target - Basic System. Jan 30 14:12:43.021962 systemd[1771]: Reached target default.target - Main User Target. Jan 30 14:12:43.021993 systemd[1771]: Startup finished in 136ms. Jan 30 14:12:43.022684 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:12:43.033205 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:12:43.732379 systemd[1]: Started sshd@1-168.119.241.96:22-139.178.68.195:34222.service - OpenSSH per-connection server daemon (139.178.68.195:34222). Jan 30 14:12:44.722213 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 34222 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:44.724544 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:44.729728 systemd-logind[1454]: New session 2 of user core. Jan 30 14:12:44.738451 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:12:45.411745 sshd[1782]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:45.416467 systemd[1]: sshd@1-168.119.241.96:22-139.178.68.195:34222.service: Deactivated successfully. Jan 30 14:12:45.418530 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:12:45.420135 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:12:45.421478 systemd-logind[1454]: Removed session 2. Jan 30 14:12:45.588391 systemd[1]: Started sshd@2-168.119.241.96:22-139.178.68.195:45318.service - OpenSSH per-connection server daemon (139.178.68.195:45318). Jan 30 14:12:46.328091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 14:12:46.338313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:46.475189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
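The "Accepted publickey ... SSH2: RSA SHA256:DIoLr..." records above use OpenSSH's SHA256 fingerprint format: the unpadded base64 of the SHA-256 digest of the wire-format key blob. A standard-library Go sketch that recomputes those fingerprints from the authorized_keys file the earlier update-ssh-keys records wrote (the path comes from the log):

// sshfp.go - recompute the OpenSSH "SHA256:..." fingerprints seen in the
// "Accepted publickey" records above from an authorized_keys file.
package main

import (
	"bufio"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/home/core/.ssh/authorized_keys") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue // skip blanks and comments
		}
		// Field 2 is the base64 wire-format key blob; its SHA-256 digest,
		// base64-encoded without padding, is what sshd logs as "SHA256:...".
		blob, err := base64.StdEncoding.DecodeString(fields[1])
		if err != nil {
			continue
		}
		sum := sha256.Sum256(blob)
		fmt.Printf("%s SHA256:%s\n", fields[0], base64.RawStdEncoding.EncodeToString(sum[:]))
	}
}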
Jan 30 14:12:46.480392 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:46.531739 kubelet[1799]: E0130 14:12:46.531568 1799 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:46.535450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:46.535661 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:46.563745 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 45318 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:46.566225 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:46.572418 systemd-logind[1454]: New session 3 of user core. Jan 30 14:12:46.579285 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:12:47.239213 sshd[1789]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:47.244621 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:12:47.244810 systemd[1]: sshd@2-168.119.241.96:22-139.178.68.195:45318.service: Deactivated successfully. Jan 30 14:12:47.247566 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:12:47.252181 systemd-logind[1454]: Removed session 3. Jan 30 14:12:47.419440 systemd[1]: Started sshd@3-168.119.241.96:22-139.178.68.195:45334.service - OpenSSH per-connection server daemon (139.178.68.195:45334). Jan 30 14:12:48.405378 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 45334 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:48.407430 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:48.414495 systemd-logind[1454]: New session 4 of user core. Jan 30 14:12:48.425237 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:12:49.095541 sshd[1812]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:49.100675 systemd[1]: sshd@3-168.119.241.96:22-139.178.68.195:45334.service: Deactivated successfully. Jan 30 14:12:49.103581 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:12:49.106787 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:12:49.109197 systemd-logind[1454]: Removed session 4. Jan 30 14:12:49.270800 systemd[1]: Started sshd@4-168.119.241.96:22-139.178.68.195:45338.service - OpenSSH per-connection server daemon (139.178.68.195:45338). Jan 30 14:12:50.247022 sshd[1819]: Accepted publickey for core from 139.178.68.195 port 45338 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:50.248524 sshd[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:50.253621 systemd-logind[1454]: New session 5 of user core. Jan 30 14:12:50.260787 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 14:12:50.774283 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:12:50.774595 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:50.793007 sudo[1822]: pam_unix(sudo:session): session closed for user root Jan 30 14:12:50.952749 sshd[1819]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:50.958962 systemd[1]: sshd@4-168.119.241.96:22-139.178.68.195:45338.service: Deactivated successfully. Jan 30 14:12:50.961666 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:12:50.964221 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:12:50.965683 systemd-logind[1454]: Removed session 5. Jan 30 14:12:51.129453 systemd[1]: Started sshd@5-168.119.241.96:22-139.178.68.195:45352.service - OpenSSH per-connection server daemon (139.178.68.195:45352). Jan 30 14:12:52.111911 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 45352 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:52.113845 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:52.119126 systemd-logind[1454]: New session 6 of user core. Jan 30 14:12:52.125103 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:12:52.638357 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:12:52.638690 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:52.644166 sudo[1831]: pam_unix(sudo:session): session closed for user root Jan 30 14:12:52.650813 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:12:52.651177 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:52.671334 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:12:52.673300 auditctl[1834]: No rules Jan 30 14:12:52.673758 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:12:52.674011 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:12:52.677686 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:12:52.710698 augenrules[1852]: No rules Jan 30 14:12:52.712947 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:12:52.714367 sudo[1830]: pam_unix(sudo:session): session closed for user root Jan 30 14:12:52.874959 sshd[1827]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:52.879951 systemd[1]: sshd@5-168.119.241.96:22-139.178.68.195:45352.service: Deactivated successfully. Jan 30 14:12:52.886171 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:12:52.887928 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:12:52.894274 systemd-logind[1454]: Removed session 6. Jan 30 14:12:53.050218 systemd[1]: Started sshd@6-168.119.241.96:22-139.178.68.195:45356.service - OpenSSH per-connection server daemon (139.178.68.195:45356). Jan 30 14:12:54.034115 sshd[1860]: Accepted publickey for core from 139.178.68.195 port 45356 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:54.036108 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:54.042336 systemd-logind[1454]: New session 7 of user core. 
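The two sudo invocations above delete the shipped audit rule files and restart audit-rules.service; the restart flushes the loaded kernel rules and reloads from the now-empty /etc/audit/rules.d, which is why both auditctl and augenrules report "No rules". A rough sketch of the equivalent command sequence via os/exec, assuming auditctl and augenrules are on PATH (the unit's actual Exec wiring may differ):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func run(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	log.Printf("%s %v: %s (err=%v)", name, args, out, err)
    }

    func main() {
    	run("auditctl", "-D")       // flush the currently loaded kernel audit rules
    	run("augenrules", "--load") // rebuild and load rules from /etc/audit/rules.d
    }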
Jan 30 14:12:54.053349 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:12:54.556328 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:12:54.556662 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:54.874804 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:12:54.877067 (dockerd)[1878]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:12:55.134967 dockerd[1878]: time="2025-01-30T14:12:55.134631996Z" level=info msg="Starting up" Jan 30 14:12:55.219847 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1816422525-merged.mount: Deactivated successfully. Jan 30 14:12:55.249839 dockerd[1878]: time="2025-01-30T14:12:55.249783694Z" level=info msg="Loading containers: start." Jan 30 14:12:55.363935 kernel: Initializing XFRM netlink socket Jan 30 14:12:55.467342 systemd-networkd[1372]: docker0: Link UP Jan 30 14:12:55.490302 dockerd[1878]: time="2025-01-30T14:12:55.490217256Z" level=info msg="Loading containers: done." Jan 30 14:12:55.507928 dockerd[1878]: time="2025-01-30T14:12:55.507585345Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:12:55.507928 dockerd[1878]: time="2025-01-30T14:12:55.507753585Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:12:55.508254 dockerd[1878]: time="2025-01-30T14:12:55.507972585Z" level=info msg="Daemon has completed initialization" Jan 30 14:12:55.548702 dockerd[1878]: time="2025-01-30T14:12:55.548138926Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:12:55.549456 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:12:56.217274 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3131233473-merged.mount: Deactivated successfully. Jan 30 14:12:56.578571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 14:12:56.593044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:56.765095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:56.775482 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:56.784644 containerd[1481]: time="2025-01-30T14:12:56.784581168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:12:56.854261 kubelet[2032]: E0130 14:12:56.853859 2032 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:56.857645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:56.857795 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
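dockerd comes up with the overlay2 storage driver and warns that native diff is disabled because this kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. The same daemon facts can be read back over the API socket; a sketch using the official Go client, assuming the standard /var/run/docker.sock:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	info, err := cli.Info(context.Background())
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Matches the daemon log above: storage-driver=overlay2 version=26.1.0
    	fmt.Println("driver:", info.Driver, "version:", info.ServerVersion)
    }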
Jan 30 14:12:57.357384 update_engine[1456]: I20250130 14:12:57.356107 1456 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 14:12:57.357384 update_engine[1456]: I20250130 14:12:57.356191 1456 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 14:12:57.357384 update_engine[1456]: I20250130 14:12:57.356561 1456 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 14:12:57.357384 update_engine[1456]: I20250130 14:12:57.357339 1456 omaha_request_params.cc:62] Current group set to lts Jan 30 14:12:57.359903 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 14:12:57.360253 update_engine[1456]: I20250130 14:12:57.358865 1456 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 14:12:57.360253 update_engine[1456]: I20250130 14:12:57.358966 1456 update_attempter.cc:643] Scheduling an action processor start. Jan 30 14:12:57.360253 update_engine[1456]: I20250130 14:12:57.359642 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:12:57.360253 update_engine[1456]: I20250130 14:12:57.359862 1456 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 14:12:57.360253 update_engine[1456]: I20250130 14:12:57.359998 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:12:57.360253 update_engine[1456]: I20250130 14:12:57.360013 1456 omaha_request_action.cc:272] Request: Jan 30 14:12:57.360253 update_engine[1456]: [multi-line Omaha request XML omitted] Jan 30 14:12:57.360253 update_engine[1456]: I20250130 14:12:57.360018 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:12:57.364453 update_engine[1456]: I20250130 14:12:57.364415 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:12:57.365321 update_engine[1456]: I20250130 14:12:57.365291 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:12:57.366202 update_engine[1456]: E20250130 14:12:57.366173 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:12:57.366360 update_engine[1456]: I20250130 14:12:57.366341 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 14:12:57.448111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156695272.mount: Deactivated successfully.
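The Omaha endpoint on this host is configured as the literal string "disabled" ("Posting an Omaha request to disabled"), so DNS resolution fails by design and the fetcher just keeps retrying; the attempts in this log land roughly ten seconds apart. A simplified sketch of that fetch-and-retry loop, not the update_engine code itself, with the pacing here purely illustrative:

    package main

    import (
    	"bytes"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	const serverURL = "http://disabled" // unresolvable on purpose: updates are disabled
    	for retry := 1; retry <= 3; retry++ {
    		resp, err := http.Post(serverURL, "text/xml", bytes.NewReader(nil))
    		if err != nil {
    			log.Printf("Unable to get http response code: %v", err)
    			log.Printf("No HTTP response, retry %d", retry)
    			time.Sleep(1 * time.Second) // the real fetcher re-arms a 1-second timeout source
    			continue
    		}
    		resp.Body.Close()
    		break
    	}
    }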
Jan 30 14:12:58.472227 containerd[1481]: time="2025-01-30T14:12:58.472136753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:58.473942 containerd[1481]: time="2025-01-30T14:12:58.473683194Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 30 14:12:58.476042 containerd[1481]: time="2025-01-30T14:12:58.475945274Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:58.479922 containerd[1481]: time="2025-01-30T14:12:58.479304316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:58.481868 containerd[1481]: time="2025-01-30T14:12:58.480852157Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.696211709s" Jan 30 14:12:58.481868 containerd[1481]: time="2025-01-30T14:12:58.480931517Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 14:12:58.508411 containerd[1481]: time="2025-01-30T14:12:58.508349248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:12:59.816946 containerd[1481]: time="2025-01-30T14:12:59.816655698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.818995 containerd[1481]: time="2025-01-30T14:12:59.818875458Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 30 14:12:59.819912 containerd[1481]: time="2025-01-30T14:12:59.819828236Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.827038 containerd[1481]: time="2025-01-30T14:12:59.826956645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.828534 containerd[1481]: time="2025-01-30T14:12:59.828484433Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.320061505s" Jan 30 14:12:59.828697 containerd[1481]: time="2025-01-30T14:12:59.828679997Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 14:12:59.857731 
containerd[1481]: time="2025-01-30T14:12:59.857665565Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:13:00.771500 containerd[1481]: time="2025-01-30T14:13:00.769811010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:00.771500 containerd[1481]: time="2025-01-30T14:13:00.771433558Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 30 14:13:00.772112 containerd[1481]: time="2025-01-30T14:13:00.772058170Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:00.776858 containerd[1481]: time="2025-01-30T14:13:00.776792013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:00.778139 containerd[1481]: time="2025-01-30T14:13:00.778084956Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 919.866301ms" Jan 30 14:13:00.778139 containerd[1481]: time="2025-01-30T14:13:00.778132037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 14:13:00.806415 containerd[1481]: time="2025-01-30T14:13:00.806378418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:13:01.865116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401363243.mount: Deactivated successfully. 
Jan 30 14:13:02.263693 containerd[1481]: time="2025-01-30T14:13:02.263548820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:02.265350 containerd[1481]: time="2025-01-30T14:13:02.265099766Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 30 14:13:02.267752 containerd[1481]: time="2025-01-30T14:13:02.266640992Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:02.269963 containerd[1481]: time="2025-01-30T14:13:02.269913527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:02.270911 containerd[1481]: time="2025-01-30T14:13:02.270816102Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.464383924s" Jan 30 14:13:02.271162 containerd[1481]: time="2025-01-30T14:13:02.271125027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 14:13:02.297426 containerd[1481]: time="2025-01-30T14:13:02.297377867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:13:02.915094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950546388.mount: Deactivated successfully. 
Jan 30 14:13:03.556087 containerd[1481]: time="2025-01-30T14:13:03.554995207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:03.556614 containerd[1481]: time="2025-01-30T14:13:03.556579953Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 30 14:13:03.557733 containerd[1481]: time="2025-01-30T14:13:03.557697051Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:03.561224 containerd[1481]: time="2025-01-30T14:13:03.561170028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:03.562721 containerd[1481]: time="2025-01-30T14:13:03.562675573Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.265249864s" Jan 30 14:13:03.562721 containerd[1481]: time="2025-01-30T14:13:03.562716893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:13:03.585470 containerd[1481]: time="2025-01-30T14:13:03.585425263Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:13:04.147626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374793038.mount: Deactivated successfully. 
Jan 30 14:13:04.157159 containerd[1481]: time="2025-01-30T14:13:04.156215618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:04.158134 containerd[1481]: time="2025-01-30T14:13:04.158085647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 30 14:13:04.159599 containerd[1481]: time="2025-01-30T14:13:04.159529070Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:04.163155 containerd[1481]: time="2025-01-30T14:13:04.162706361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:04.163754 containerd[1481]: time="2025-01-30T14:13:04.163715297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 577.975588ms" Jan 30 14:13:04.163754 containerd[1481]: time="2025-01-30T14:13:04.163753377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 14:13:04.191518 containerd[1481]: time="2025-01-30T14:13:04.191465496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:13:04.775316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272846256.mount: Deactivated successfully. Jan 30 14:13:06.199942 containerd[1481]: time="2025-01-30T14:13:06.199300243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:06.201654 containerd[1481]: time="2025-01-30T14:13:06.201607358Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 30 14:13:06.203179 containerd[1481]: time="2025-01-30T14:13:06.203088460Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:06.211109 containerd[1481]: time="2025-01-30T14:13:06.211023139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:06.213309 containerd[1481]: time="2025-01-30T14:13:06.213242332Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.021727595s" Jan 30 14:13:06.213309 containerd[1481]: time="2025-01-30T14:13:06.213297373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 14:13:06.889454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jan 30 14:13:06.901463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:07.053745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:07.067631 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:13:07.126592 kubelet[2266]: E0130 14:13:07.125626 2266 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:13:07.128064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:13:07.128210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:13:07.365516 update_engine[1456]: I20250130 14:13:07.364935 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:13:07.365516 update_engine[1456]: I20250130 14:13:07.365234 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:13:07.365516 update_engine[1456]: I20250130 14:13:07.365465 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:13:07.366583 update_engine[1456]: E20250130 14:13:07.366544 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:13:07.366751 update_engine[1456]: I20250130 14:13:07.366729 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 14:13:12.260002 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:12.268327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:12.305321 systemd[1]: Reloading requested from client PID 2313 ('systemctl') (unit session-7.scope)... Jan 30 14:13:12.305342 systemd[1]: Reloading... Jan 30 14:13:12.452064 zram_generator::config[2371]: No configuration found. Jan 30 14:13:12.526735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:13:12.596137 systemd[1]: Reloading finished in 290 ms. Jan 30 14:13:12.650167 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:13:12.650270 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:13:12.650789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:12.658436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:12.789251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:12.789389 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:13:12.839585 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:13:12.840008 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 14:13:12.840058 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:13:12.841935 kubelet[2401]: I0130 14:13:12.841594 2401 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:13:14.063777 kubelet[2401]: I0130 14:13:14.063689 2401 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:13:14.063777 kubelet[2401]: I0130 14:13:14.063747 2401 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:13:14.064282 kubelet[2401]: I0130 14:13:14.064102 2401 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:13:14.086720 kubelet[2401]: I0130 14:13:14.086671 2401 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:13:14.087650 kubelet[2401]: E0130 14:13:14.086995 2401 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://168.119.241.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.101135 kubelet[2401]: I0130 14:13:14.101065 2401 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:13:14.103278 kubelet[2401]: I0130 14:13:14.103207 2401 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:13:14.103632 kubelet[2401]: I0130 14:13:14.103416 2401 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-d-83a473bcbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:13:14.103978 kubelet[2401]: I0130 14:13:14.103837 2401 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 14:13:14.103978 kubelet[2401]: I0130 14:13:14.103855 2401 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:13:14.104290 kubelet[2401]: I0130 14:13:14.104227 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:13:14.105819 kubelet[2401]: I0130 14:13:14.105538 2401 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:13:14.105819 kubelet[2401]: I0130 14:13:14.105566 2401 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:13:14.105819 kubelet[2401]: I0130 14:13:14.105787 2401 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:13:14.106182 kubelet[2401]: I0130 14:13:14.105941 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:13:14.109759 kubelet[2401]: I0130 14:13:14.109697 2401 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:13:14.110185 kubelet[2401]: I0130 14:13:14.110159 2401 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:13:14.110250 kubelet[2401]: W0130 14:13:14.110227 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:13:14.111187 kubelet[2401]: I0130 14:13:14.111153 2401 server.go:1264] "Started kubelet" Jan 30 14:13:14.111898 kubelet[2401]: W0130 14:13:14.111319 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.241.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.111898 kubelet[2401]: E0130 14:13:14.111385 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.241.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.111898 kubelet[2401]: W0130 14:13:14.111446 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.241.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-83a473bcbf&limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.111898 kubelet[2401]: E0130 14:13:14.111471 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.241.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-83a473bcbf&limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.115139 kubelet[2401]: I0130 14:13:14.115025 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:13:14.124946 kubelet[2401]: I0130 14:13:14.124902 2401 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:13:14.126835 kubelet[2401]: I0130 14:13:14.126020 2401 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:13:14.127324 kubelet[2401]: I0130 14:13:14.127291 2401 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:13:14.128270 kubelet[2401]: I0130 14:13:14.128218 2401 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:13:14.128362 kubelet[2401]: I0130 14:13:14.128286 2401 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:13:14.129554 
kubelet[2401]: I0130 14:13:14.128431 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:13:14.129554 kubelet[2401]: I0130 14:13:14.128678 2401 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:13:14.132940 kubelet[2401]: E0130 14:13:14.131785 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.241.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-83a473bcbf?timeout=10s\": dial tcp 168.119.241.96:6443: connect: connection refused" interval="200ms" Jan 30 14:13:14.132940 kubelet[2401]: E0130 14:13:14.132302 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.241.96:6443/api/v1/namespaces/default/events\": dial tcp 168.119.241.96:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-d-83a473bcbf.181f7de2c518e6e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-d-83a473bcbf,UID:ci-4081-3-0-d-83a473bcbf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-d-83a473bcbf,},FirstTimestamp:2025-01-30 14:13:14.111125218 +0000 UTC m=+1.315860411,LastTimestamp:2025-01-30 14:13:14.111125218 +0000 UTC m=+1.315860411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-d-83a473bcbf,}" Jan 30 14:13:14.134236 kubelet[2401]: W0130 14:13:14.133793 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.241.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.134236 kubelet[2401]: E0130 14:13:14.133837 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://168.119.241.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.134518 kubelet[2401]: I0130 14:13:14.134410 2401 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:13:14.134518 kubelet[2401]: I0130 14:13:14.134492 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:13:14.140034 kubelet[2401]: I0130 14:13:14.138985 2401 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:13:14.142863 kubelet[2401]: I0130 14:13:14.141813 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:13:14.144532 kubelet[2401]: I0130 14:13:14.144439 2401 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:13:14.144532 kubelet[2401]: I0130 14:13:14.144489 2401 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:13:14.144532 kubelet[2401]: I0130 14:13:14.144508 2401 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:13:14.144705 kubelet[2401]: E0130 14:13:14.144545 2401 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:13:14.153899 kubelet[2401]: E0130 14:13:14.153854 2401 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:13:14.154219 kubelet[2401]: W0130 14:13:14.154165 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.241.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.154267 kubelet[2401]: E0130 14:13:14.154235 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.241.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:14.176629 kubelet[2401]: I0130 14:13:14.176318 2401 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:13:14.176629 kubelet[2401]: I0130 14:13:14.176337 2401 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:13:14.176629 kubelet[2401]: I0130 14:13:14.176359 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:13:14.179948 kubelet[2401]: I0130 14:13:14.179757 2401 policy_none.go:49] "None policy: Start" Jan 30 14:13:14.180673 kubelet[2401]: I0130 14:13:14.180651 2401 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:13:14.181279 kubelet[2401]: I0130 14:13:14.180858 2401 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:13:14.188446 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:13:14.205791 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 14:13:14.221490 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 30 14:13:14.224400 kubelet[2401]: I0130 14:13:14.223218 2401 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:13:14.224400 kubelet[2401]: I0130 14:13:14.223436 2401 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:13:14.224400 kubelet[2401]: I0130 14:13:14.223556 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:13:14.227473 kubelet[2401]: E0130 14:13:14.225812 2401 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-d-83a473bcbf\" not found" Jan 30 14:13:14.227473 kubelet[2401]: I0130 14:13:14.226977 2401 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.227473 kubelet[2401]: E0130 14:13:14.227461 2401 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.241.96:6443/api/v1/nodes\": dial tcp 168.119.241.96:6443: connect: connection refused" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.245003 kubelet[2401]: I0130 14:13:14.244902 2401 topology_manager.go:215] "Topology Admit Handler" podUID="505ad0fa7003b8295f07f17f7874f541" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.247844 kubelet[2401]: I0130 14:13:14.247751 2401 topology_manager.go:215] "Topology Admit Handler" podUID="272db568993b3146ac22a9e072b7129b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.251351 kubelet[2401]: I0130 14:13:14.251219 2401 topology_manager.go:215] "Topology Admit Handler" podUID="fb3f4e2d72bd69c9020f9181c9184b2b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.261538 systemd[1]: Created slice kubepods-burstable-pod505ad0fa7003b8295f07f17f7874f541.slice - libcontainer container kubepods-burstable-pod505ad0fa7003b8295f07f17f7874f541.slice. Jan 30 14:13:14.285919 systemd[1]: Created slice kubepods-burstable-pod272db568993b3146ac22a9e072b7129b.slice - libcontainer container kubepods-burstable-pod272db568993b3146ac22a9e072b7129b.slice. Jan 30 14:13:14.291828 systemd[1]: Created slice kubepods-burstable-podfb3f4e2d72bd69c9020f9181c9184b2b.slice - libcontainer container kubepods-burstable-podfb3f4e2d72bd69c9020f9181c9184b2b.slice. 
Jan 30 14:13:14.332690 kubelet[2401]: E0130 14:13:14.332607 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.241.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-83a473bcbf?timeout=10s\": dial tcp 168.119.241.96:6443: connect: connection refused" interval="400ms" Jan 30 14:13:14.429956 kubelet[2401]: I0130 14:13:14.429633 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.429956 kubelet[2401]: I0130 14:13:14.429711 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/272db568993b3146ac22a9e072b7129b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-d-83a473bcbf\" (UID: \"272db568993b3146ac22a9e072b7129b\") " pod="kube-system/kube-scheduler-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.429956 kubelet[2401]: I0130 14:13:14.429750 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb3f4e2d72bd69c9020f9181c9184b2b-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-d-83a473bcbf\" (UID: \"fb3f4e2d72bd69c9020f9181c9184b2b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.429956 kubelet[2401]: I0130 14:13:14.429783 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb3f4e2d72bd69c9020f9181c9184b2b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-d-83a473bcbf\" (UID: \"fb3f4e2d72bd69c9020f9181c9184b2b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.429956 kubelet[2401]: I0130 14:13:14.429838 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.430413 kubelet[2401]: I0130 14:13:14.429926 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.430413 kubelet[2401]: I0130 14:13:14.429975 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb3f4e2d72bd69c9020f9181c9184b2b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-d-83a473bcbf\" (UID: \"fb3f4e2d72bd69c9020f9181c9184b2b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.430413 kubelet[2401]: I0130 14:13:14.430012 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.430413 kubelet[2401]: I0130 14:13:14.430060 2401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.432877 kubelet[2401]: I0130 14:13:14.432368 2401 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.432877 kubelet[2401]: E0130 14:13:14.432772 2401 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.241.96:6443/api/v1/nodes\": dial tcp 168.119.241.96:6443: connect: connection refused" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.583514 containerd[1481]: time="2025-01-30T14:13:14.583331327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-d-83a473bcbf,Uid:505ad0fa7003b8295f07f17f7874f541,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:14.590044 containerd[1481]: time="2025-01-30T14:13:14.589985207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-d-83a473bcbf,Uid:272db568993b3146ac22a9e072b7129b,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:14.596516 containerd[1481]: time="2025-01-30T14:13:14.596440165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-d-83a473bcbf,Uid:fb3f4e2d72bd69c9020f9181c9184b2b,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:14.734197 kubelet[2401]: E0130 14:13:14.734034 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.241.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-83a473bcbf?timeout=10s\": dial tcp 168.119.241.96:6443: connect: connection refused" interval="800ms" Jan 30 14:13:14.835355 kubelet[2401]: I0130 14:13:14.835162 2401 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:14.835735 kubelet[2401]: E0130 14:13:14.835552 2401 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.241.96:6443/api/v1/nodes\": dial tcp 168.119.241.96:6443: connect: connection refused" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:15.038842 kubelet[2401]: W0130 14:13:15.038589 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.241.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.038842 kubelet[2401]: E0130 14:13:15.038691 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.241.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.061480 kubelet[2401]: W0130 14:13:15.061300 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://168.119.241.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-83a473bcbf&limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.061480 kubelet[2401]: E0130 14:13:15.061367 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.241.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-83a473bcbf&limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.103812 kubelet[2401]: W0130 14:13:15.102816 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.241.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.103812 kubelet[2401]: E0130 14:13:15.102967 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.241.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.121181 kubelet[2401]: W0130 14:13:15.120842 2401 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.241.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.121181 kubelet[2401]: E0130 14:13:15.120990 2401 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://168.119.241.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.241.96:6443: connect: connection refused Jan 30 14:13:15.149976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2163164897.mount: Deactivated successfully. 
Jan 30 14:13:15.160970 containerd[1481]: time="2025-01-30T14:13:15.160147211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:13:15.162231 containerd[1481]: time="2025-01-30T14:13:15.162167115Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 30 14:13:15.171924 containerd[1481]: time="2025-01-30T14:13:15.171264582Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:13:15.172077 containerd[1481]: time="2025-01-30T14:13:15.172046391Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:13:15.172582 containerd[1481]: time="2025-01-30T14:13:15.172548837Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:13:15.174360 containerd[1481]: time="2025-01-30T14:13:15.174306978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:13:15.174468 containerd[1481]: time="2025-01-30T14:13:15.174433699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:13:15.178237 containerd[1481]: time="2025-01-30T14:13:15.178174983Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.702134ms" Jan 30 14:13:15.179413 containerd[1481]: time="2025-01-30T14:13:15.179353957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.81359ms" Jan 30 14:13:15.179590 containerd[1481]: time="2025-01-30T14:13:15.179546560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:13:15.185764 containerd[1481]: time="2025-01-30T14:13:15.185466669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 595.393941ms" Jan 30 14:13:15.319270 containerd[1481]: time="2025-01-30T14:13:15.316344970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:15.319270 containerd[1481]: time="2025-01-30T14:13:15.318925721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:15.319270 containerd[1481]: time="2025-01-30T14:13:15.318944681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:15.319270 containerd[1481]: time="2025-01-30T14:13:15.319199724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:15.321218 containerd[1481]: time="2025-01-30T14:13:15.320390578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:15.321218 containerd[1481]: time="2025-01-30T14:13:15.320452819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:15.321218 containerd[1481]: time="2025-01-30T14:13:15.320469059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:15.321218 containerd[1481]: time="2025-01-30T14:13:15.320547060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:15.321592 containerd[1481]: time="2025-01-30T14:13:15.321395510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:15.321592 containerd[1481]: time="2025-01-30T14:13:15.321438350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:15.321592 containerd[1481]: time="2025-01-30T14:13:15.321449951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:15.321592 containerd[1481]: time="2025-01-30T14:13:15.321527072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:15.348175 systemd[1]: Started cri-containerd-217d00a61a8dcf672ede8192ceb952fb59447704df483c5d602abe9b674a20a8.scope - libcontainer container 217d00a61a8dcf672ede8192ceb952fb59447704df483c5d602abe9b674a20a8. Jan 30 14:13:15.349413 systemd[1]: Started cri-containerd-f79e1a1b9564568ab86416d02c6818b11415ad77d6f7ee6b4f63cbb253299971.scope - libcontainer container f79e1a1b9564568ab86416d02c6818b11415ad77d6f7ee6b4f63cbb253299971. Jan 30 14:13:15.354748 systemd[1]: Started cri-containerd-99bd1cf433d0ced84065d701fda3c045a41dc0d0df29c8a7a337a2dc5be45b79.scope - libcontainer container 99bd1cf433d0ced84065d701fda3c045a41dc0d0df29c8a7a337a2dc5be45b79. 
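The cri-containerd-<id>.scope units started here are the sandbox (pause) containers for the three static pods, created by the kubelet's RunPodSandbox calls over the CRI gRPC API. A minimal sketch of the same call made directly against containerd's CRI endpoint, with metadata values taken from the kube-scheduler entry above; a real request also carries fields such as the log directory that are omitted here:

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-scheduler-ci-4081-3-0-d-83a473bcbf",
    				Uid:       "272db568993b3146ac22a9e072b7129b",
    				Namespace: "kube-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("returns sandbox id %q", resp.PodSandboxId) // cf. the containerd lines above
    }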
Jan 30 14:13:15.400949 containerd[1481]: time="2025-01-30T14:13:15.400492081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-d-83a473bcbf,Uid:505ad0fa7003b8295f07f17f7874f541,Namespace:kube-system,Attempt:0,} returns sandbox id \"217d00a61a8dcf672ede8192ceb952fb59447704df483c5d602abe9b674a20a8\"" Jan 30 14:13:15.410122 containerd[1481]: time="2025-01-30T14:13:15.409752110Z" level=info msg="CreateContainer within sandbox \"217d00a61a8dcf672ede8192ceb952fb59447704df483c5d602abe9b674a20a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:13:15.422740 containerd[1481]: time="2025-01-30T14:13:15.422215057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-d-83a473bcbf,Uid:272db568993b3146ac22a9e072b7129b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f79e1a1b9564568ab86416d02c6818b11415ad77d6f7ee6b4f63cbb253299971\"" Jan 30 14:13:15.427866 containerd[1481]: time="2025-01-30T14:13:15.427640401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-d-83a473bcbf,Uid:fb3f4e2d72bd69c9020f9181c9184b2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"99bd1cf433d0ced84065d701fda3c045a41dc0d0df29c8a7a337a2dc5be45b79\"" Jan 30 14:13:15.428246 containerd[1481]: time="2025-01-30T14:13:15.428065206Z" level=info msg="CreateContainer within sandbox \"f79e1a1b9564568ab86416d02c6818b11415ad77d6f7ee6b4f63cbb253299971\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:13:15.432823 containerd[1481]: time="2025-01-30T14:13:15.432771022Z" level=info msg="CreateContainer within sandbox \"217d00a61a8dcf672ede8192ceb952fb59447704df483c5d602abe9b674a20a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8763a3a33471074af9d2712562b75bfd6c1e5e5448b09e3cfc41b3276b5f8d7d\"" Jan 30 14:13:15.443815 containerd[1481]: time="2025-01-30T14:13:15.441971730Z" level=info msg="StartContainer for \"8763a3a33471074af9d2712562b75bfd6c1e5e5448b09e3cfc41b3276b5f8d7d\"" Jan 30 14:13:15.448547 containerd[1481]: time="2025-01-30T14:13:15.448452326Z" level=info msg="CreateContainer within sandbox \"99bd1cf433d0ced84065d701fda3c045a41dc0d0df29c8a7a337a2dc5be45b79\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:13:15.449209 containerd[1481]: time="2025-01-30T14:13:15.449149374Z" level=info msg="CreateContainer within sandbox \"f79e1a1b9564568ab86416d02c6818b11415ad77d6f7ee6b4f63cbb253299971\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"04a5f039c4a9a12bdc3cff5c4a55443433ce63b9c423700bce7fb930381c5313\"" Jan 30 14:13:15.452109 containerd[1481]: time="2025-01-30T14:13:15.451969408Z" level=info msg="StartContainer for \"04a5f039c4a9a12bdc3cff5c4a55443433ce63b9c423700bce7fb930381c5313\"" Jan 30 14:13:15.474618 containerd[1481]: time="2025-01-30T14:13:15.474244030Z" level=info msg="CreateContainer within sandbox \"99bd1cf433d0ced84065d701fda3c045a41dc0d0df29c8a7a337a2dc5be45b79\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2d825b34074bffb01d061dd6747521f6a7b0d85de69e2f50e65e8cbcfe6fb63e\"" Jan 30 14:13:15.475511 containerd[1481]: time="2025-01-30T14:13:15.475324443Z" level=info msg="StartContainer for \"2d825b34074bffb01d061dd6747521f6a7b0d85de69e2f50e65e8cbcfe6fb63e\"" Jan 30 14:13:15.483163 systemd[1]: Started cri-containerd-8763a3a33471074af9d2712562b75bfd6c1e5e5448b09e3cfc41b3276b5f8d7d.scope - libcontainer container 
8763a3a33471074af9d2712562b75bfd6c1e5e5448b09e3cfc41b3276b5f8d7d. Jan 30 14:13:15.493328 systemd[1]: Started cri-containerd-04a5f039c4a9a12bdc3cff5c4a55443433ce63b9c423700bce7fb930381c5313.scope - libcontainer container 04a5f039c4a9a12bdc3cff5c4a55443433ce63b9c423700bce7fb930381c5313. Jan 30 14:13:15.520362 systemd[1]: Started cri-containerd-2d825b34074bffb01d061dd6747521f6a7b0d85de69e2f50e65e8cbcfe6fb63e.scope - libcontainer container 2d825b34074bffb01d061dd6747521f6a7b0d85de69e2f50e65e8cbcfe6fb63e. Jan 30 14:13:15.535448 kubelet[2401]: E0130 14:13:15.534828 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.241.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-83a473bcbf?timeout=10s\": dial tcp 168.119.241.96:6443: connect: connection refused" interval="1.6s" Jan 30 14:13:15.549845 containerd[1481]: time="2025-01-30T14:13:15.549791680Z" level=info msg="StartContainer for \"8763a3a33471074af9d2712562b75bfd6c1e5e5448b09e3cfc41b3276b5f8d7d\" returns successfully" Jan 30 14:13:15.571762 containerd[1481]: time="2025-01-30T14:13:15.571527616Z" level=info msg="StartContainer for \"04a5f039c4a9a12bdc3cff5c4a55443433ce63b9c423700bce7fb930381c5313\" returns successfully" Jan 30 14:13:15.596439 containerd[1481]: time="2025-01-30T14:13:15.596385748Z" level=info msg="StartContainer for \"2d825b34074bffb01d061dd6747521f6a7b0d85de69e2f50e65e8cbcfe6fb63e\" returns successfully" Jan 30 14:13:15.639452 kubelet[2401]: I0130 14:13:15.639229 2401 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:15.640440 kubelet[2401]: E0130 14:13:15.639818 2401 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.241.96:6443/api/v1/nodes\": dial tcp 168.119.241.96:6443: connect: connection refused" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:17.242666 kubelet[2401]: I0130 14:13:17.242612 2401 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:17.357433 update_engine[1456]: I20250130 14:13:17.356910 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:13:17.357433 update_engine[1456]: I20250130 14:13:17.357193 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:13:17.357433 update_engine[1456]: I20250130 14:13:17.357388 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 14:13:17.358270 update_engine[1456]: E20250130 14:13:17.358238 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:13:17.358399 update_engine[1456]: I20250130 14:13:17.358381 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 14:13:18.585922 kubelet[2401]: E0130 14:13:18.585866 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-d-83a473bcbf\" not found" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:18.628909 kubelet[2401]: E0130 14:13:18.628139 2401 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-d-83a473bcbf.181f7de2c518e6e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-d-83a473bcbf,UID:ci-4081-3-0-d-83a473bcbf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-d-83a473bcbf,},FirstTimestamp:2025-01-30 14:13:14.111125218 +0000 UTC m=+1.315860411,LastTimestamp:2025-01-30 14:13:14.111125218 +0000 UTC m=+1.315860411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-d-83a473bcbf,}" Jan 30 14:13:18.679020 kubelet[2401]: I0130 14:13:18.677914 2401 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:18.695198 kubelet[2401]: E0130 14:13:18.694912 2401 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-d-83a473bcbf.181f7de2c7a49c36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-d-83a473bcbf,UID:ci-4081-3-0-d-83a473bcbf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-d-83a473bcbf,},FirstTimestamp:2025-01-30 14:13:14.153835574 +0000 UTC m=+1.358570767,LastTimestamp:2025-01-30 14:13:14.153835574 +0000 UTC m=+1.358570767,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-d-83a473bcbf,}" Jan 30 14:13:19.108993 kubelet[2401]: I0130 14:13:19.108854 2401 apiserver.go:52] "Watching apiserver" Jan 30 14:13:19.129173 kubelet[2401]: I0130 14:13:19.129110 2401 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:13:20.752500 systemd[1]: Reloading requested from client PID 2671 ('systemctl') (unit session-7.scope)... Jan 30 14:13:20.752523 systemd[1]: Reloading... Jan 30 14:13:20.844930 zram_generator::config[2708]: No configuration found. Jan 30 14:13:20.959653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:13:21.041259 systemd[1]: Reloading finished in 288 ms. Jan 30 14:13:21.085289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:21.099386 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:13:21.099717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:13:21.099801 systemd[1]: kubelet.service: Consumed 1.775s CPU time, 113.8M memory peak, 0B memory swap peak. Jan 30 14:13:21.105231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:21.236559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:21.249270 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:13:21.313765 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:13:21.316663 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:13:21.316663 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:13:21.316663 kubelet[2755]: I0130 14:13:21.314291 2755 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:13:21.322249 kubelet[2755]: I0130 14:13:21.322184 2755 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:13:21.322249 kubelet[2755]: I0130 14:13:21.322227 2755 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:13:21.322583 kubelet[2755]: I0130 14:13:21.322561 2755 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:13:21.324430 kubelet[2755]: I0130 14:13:21.324388 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:13:21.326592 kubelet[2755]: I0130 14:13:21.326370 2755 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:13:21.333108 kubelet[2755]: I0130 14:13:21.333054 2755 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:13:21.333351 kubelet[2755]: I0130 14:13:21.333279 2755 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:13:21.333523 kubelet[2755]: I0130 14:13:21.333315 2755 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-d-83a473bcbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:13:21.333523 kubelet[2755]: I0130 14:13:21.333504 2755 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:13:21.333523 kubelet[2755]: I0130 14:13:21.333516 2755 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:13:21.333734 kubelet[2755]: I0130 14:13:21.333549 2755 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:13:21.333734 kubelet[2755]: I0130 14:13:21.333672 2755 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:13:21.333734 kubelet[2755]: I0130 14:13:21.333684 2755 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:13:21.335489 kubelet[2755]: I0130 14:13:21.333711 2755 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:13:21.335591 kubelet[2755]: I0130 14:13:21.335502 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:13:21.337114 kubelet[2755]: I0130 14:13:21.337079 2755 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:13:21.337303 kubelet[2755]: I0130 14:13:21.337270 2755 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:13:21.339205 kubelet[2755]: I0130 14:13:21.337689 2755 server.go:1264] "Started kubelet" Jan 30 14:13:21.342195 kubelet[2755]: I0130 14:13:21.342166 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:13:21.346546 kubelet[2755]: I0130 14:13:21.344849 2755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:13:21.347711 kubelet[2755]: I0130 14:13:21.347470 2755 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:13:21.363926 kubelet[2755]: I0130 14:13:21.359036 2755 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:13:21.363926 kubelet[2755]: I0130 14:13:21.360329 2755 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:13:21.364164 kubelet[2755]: I0130 14:13:21.360869 2755 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:13:21.365970 kubelet[2755]: I0130 14:13:21.364393 2755 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:13:21.366509 kubelet[2755]: I0130 14:13:21.361042 2755 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:13:21.366974 kubelet[2755]: I0130 14:13:21.366953 2755 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:13:21.368070 kubelet[2755]: I0130 14:13:21.368042 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:13:21.371653 kubelet[2755]: I0130 14:13:21.370486 2755 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:13:21.375309 kubelet[2755]: I0130 14:13:21.374524 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:13:21.375714 kubelet[2755]: I0130 14:13:21.375561 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:13:21.375714 kubelet[2755]: I0130 14:13:21.375610 2755 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:13:21.375714 kubelet[2755]: I0130 14:13:21.375634 2755 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:13:21.375714 kubelet[2755]: E0130 14:13:21.375681 2755 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:13:21.388149 kubelet[2755]: E0130 14:13:21.388065 2755 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:13:21.434512 kubelet[2755]: I0130 14:13:21.434457 2755 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:13:21.435292 kubelet[2755]: I0130 14:13:21.434479 2755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:13:21.435292 kubelet[2755]: I0130 14:13:21.434659 2755 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:13:21.435292 kubelet[2755]: I0130 14:13:21.434846 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:13:21.435292 kubelet[2755]: I0130 14:13:21.434858 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:13:21.435292 kubelet[2755]: I0130 14:13:21.434877 2755 policy_none.go:49] "None policy: Start" Jan 30 14:13:21.435675 kubelet[2755]: I0130 14:13:21.435622 2755 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:13:21.435675 kubelet[2755]: I0130 14:13:21.435658 2755 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:13:21.436623 kubelet[2755]: I0130 14:13:21.435830 2755 state_mem.go:75] "Updated machine memory state" Jan 30 14:13:21.440969 kubelet[2755]: I0130 14:13:21.440936 2755 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:13:21.443053 kubelet[2755]: I0130 14:13:21.442843 2755 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:13:21.444027 kubelet[2755]: I0130 14:13:21.443996 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:13:21.463755 kubelet[2755]: I0130 14:13:21.463645 2755 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.473674 kubelet[2755]: I0130 14:13:21.473643 2755 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.473808 kubelet[2755]: I0130 14:13:21.473741 2755 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.477250 kubelet[2755]: I0130 14:13:21.476679 2755 topology_manager.go:215] "Topology Admit Handler" podUID="fb3f4e2d72bd69c9020f9181c9184b2b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.477672 kubelet[2755]: I0130 14:13:21.477566 2755 topology_manager.go:215] "Topology Admit Handler" podUID="505ad0fa7003b8295f07f17f7874f541" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.478209 kubelet[2755]: I0130 14:13:21.478028 2755 topology_manager.go:215] "Topology Admit Handler" podUID="272db568993b3146ac22a9e072b7129b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.489501 kubelet[2755]: E0130 14:13:21.489340 2755 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.568930 kubelet[2755]: I0130 14:13:21.568555 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.568930 
kubelet[2755]: I0130 14:13:21.568607 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.568930 kubelet[2755]: I0130 14:13:21.568632 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/272db568993b3146ac22a9e072b7129b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-d-83a473bcbf\" (UID: \"272db568993b3146ac22a9e072b7129b\") " pod="kube-system/kube-scheduler-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.568930 kubelet[2755]: I0130 14:13:21.568651 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.568930 kubelet[2755]: I0130 14:13:21.568673 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.569201 kubelet[2755]: I0130 14:13:21.568694 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb3f4e2d72bd69c9020f9181c9184b2b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-d-83a473bcbf\" (UID: \"fb3f4e2d72bd69c9020f9181c9184b2b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.569201 kubelet[2755]: I0130 14:13:21.568713 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/505ad0fa7003b8295f07f17f7874f541-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-d-83a473bcbf\" (UID: \"505ad0fa7003b8295f07f17f7874f541\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.569201 kubelet[2755]: I0130 14:13:21.568729 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb3f4e2d72bd69c9020f9181c9184b2b-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-d-83a473bcbf\" (UID: \"fb3f4e2d72bd69c9020f9181c9184b2b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:21.569201 kubelet[2755]: I0130 14:13:21.568746 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb3f4e2d72bd69c9020f9181c9184b2b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-d-83a473bcbf\" (UID: \"fb3f4e2d72bd69c9020f9181c9184b2b\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:22.337096 kubelet[2755]: I0130 14:13:22.336214 2755 apiserver.go:52] "Watching apiserver" Jan 30 14:13:22.365061 kubelet[2755]: 
I0130 14:13:22.364911 2755 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:13:22.426913 kubelet[2755]: E0130 14:13:22.424611 2755 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-d-83a473bcbf\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" Jan 30 14:13:22.456663 kubelet[2755]: I0130 14:13:22.456563 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-d-83a473bcbf" podStartSLOduration=1.456539134 podStartE2EDuration="1.456539134s" podCreationTimestamp="2025-01-30 14:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:22.44186639 +0000 UTC m=+1.186776659" watchObservedRunningTime="2025-01-30 14:13:22.456539134 +0000 UTC m=+1.201449443" Jan 30 14:13:22.476472 kubelet[2755]: I0130 14:13:22.476418 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-d-83a473bcbf" podStartSLOduration=1.476397849 podStartE2EDuration="1.476397849s" podCreationTimestamp="2025-01-30 14:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:22.458902917 +0000 UTC m=+1.203813226" watchObservedRunningTime="2025-01-30 14:13:22.476397849 +0000 UTC m=+1.221308118" Jan 30 14:13:22.498496 kubelet[2755]: I0130 14:13:22.498324 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-d-83a473bcbf" podStartSLOduration=2.498306345 podStartE2EDuration="2.498306345s" podCreationTimestamp="2025-01-30 14:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:22.477970425 +0000 UTC m=+1.222880734" watchObservedRunningTime="2025-01-30 14:13:22.498306345 +0000 UTC m=+1.243216614" Jan 30 14:13:26.962100 sudo[1863]: pam_unix(sudo:session): session closed for user root Jan 30 14:13:27.123667 sshd[1860]: pam_unix(sshd:session): session closed for user core Jan 30 14:13:27.130063 systemd[1]: sshd@6-168.119.241.96:22-139.178.68.195:45356.service: Deactivated successfully. Jan 30 14:13:27.134653 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:13:27.137062 systemd[1]: session-7.scope: Consumed 7.897s CPU time, 188.3M memory peak, 0B memory swap peak. Jan 30 14:13:27.140755 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:13:27.142267 systemd-logind[1454]: Removed session 7. Jan 30 14:13:27.355782 update_engine[1456]: I20250130 14:13:27.355192 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:13:27.355782 update_engine[1456]: I20250130 14:13:27.355475 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:13:27.355782 update_engine[1456]: I20250130 14:13:27.355718 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 14:13:27.359418 update_engine[1456]: E20250130 14:13:27.358364 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358479 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358493 1456 omaha_request_action.cc:617] Omaha request response: Jan 30 14:13:27.359418 update_engine[1456]: E20250130 14:13:27.358623 1456 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358652 1456 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358662 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358671 1456 update_attempter.cc:306] Processing Done. Jan 30 14:13:27.359418 update_engine[1456]: E20250130 14:13:27.358690 1456 update_attempter.cc:619] Update failed. Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358702 1456 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358711 1456 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358721 1456 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358817 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358850 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:13:27.359418 update_engine[1456]: I20250130 14:13:27.358858 1456 omaha_request_action.cc:272] Request: Jan 30 14:13:27.359418 update_engine[1456]: [Omaha request XML not captured] Jan 30 14:13:27.360269 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 14:13:27.360626 update_engine[1456]: [Omaha request XML not captured] Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.358868 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.359192 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.359433 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 14:13:27.360626 update_engine[1456]: E20250130 14:13:27.360280 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.360361 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.360376 1456 omaha_request_action.cc:617] Omaha request response: Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.360391 1456 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.360399 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.360407 1456 update_attempter.cc:306] Processing Done. Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.360417 1456 update_attempter.cc:310] Error event sent. Jan 30 14:13:27.360626 update_engine[1456]: I20250130 14:13:27.360433 1456 update_check_scheduler.cc:74] Next update check in 48m34s Jan 30 14:13:27.361280 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 14:13:36.739710 kubelet[2755]: I0130 14:13:36.739655 2755 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:13:36.741251 containerd[1481]: time="2025-01-30T14:13:36.741186072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:13:36.745333 kubelet[2755]: I0130 14:13:36.742930 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:13:36.882589 kubelet[2755]: I0130 14:13:36.882530 2755 topology_manager.go:215] "Topology Admit Handler" podUID="29cce218-3858-49d9-b043-1d6d700abb3f" podNamespace="kube-system" podName="kube-proxy-wdkjx" Jan 30 14:13:36.892416 systemd[1]: Created slice kubepods-besteffort-pod29cce218_3858_49d9_b043_1d6d700abb3f.slice - libcontainer container kubepods-besteffort-pod29cce218_3858_49d9_b043_1d6d700abb3f.slice. 
Jan 30 14:13:36.972650 kubelet[2755]: I0130 14:13:36.972545 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/29cce218-3858-49d9-b043-1d6d700abb3f-kube-proxy\") pod \"kube-proxy-wdkjx\" (UID: \"29cce218-3858-49d9-b043-1d6d700abb3f\") " pod="kube-system/kube-proxy-wdkjx" Jan 30 14:13:36.972650 kubelet[2755]: I0130 14:13:36.972656 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29cce218-3858-49d9-b043-1d6d700abb3f-lib-modules\") pod \"kube-proxy-wdkjx\" (UID: \"29cce218-3858-49d9-b043-1d6d700abb3f\") " pod="kube-system/kube-proxy-wdkjx" Jan 30 14:13:36.972865 kubelet[2755]: I0130 14:13:36.972681 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29cce218-3858-49d9-b043-1d6d700abb3f-xtables-lock\") pod \"kube-proxy-wdkjx\" (UID: \"29cce218-3858-49d9-b043-1d6d700abb3f\") " pod="kube-system/kube-proxy-wdkjx" Jan 30 14:13:36.972865 kubelet[2755]: I0130 14:13:36.972705 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rzn\" (UniqueName: \"kubernetes.io/projected/29cce218-3858-49d9-b043-1d6d700abb3f-kube-api-access-d5rzn\") pod \"kube-proxy-wdkjx\" (UID: \"29cce218-3858-49d9-b043-1d6d700abb3f\") " pod="kube-system/kube-proxy-wdkjx" Jan 30 14:13:37.069533 kubelet[2755]: I0130 14:13:37.068388 2755 topology_manager.go:215] "Topology Admit Handler" podUID="07ab5799-4c33-4763-9470-c64ae3d0bd28" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-rgtdg" Jan 30 14:13:37.073558 kubelet[2755]: I0130 14:13:37.073513 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07ab5799-4c33-4763-9470-c64ae3d0bd28-var-lib-calico\") pod \"tigera-operator-7bc55997bb-rgtdg\" (UID: \"07ab5799-4c33-4763-9470-c64ae3d0bd28\") " pod="tigera-operator/tigera-operator-7bc55997bb-rgtdg" Jan 30 14:13:37.074050 kubelet[2755]: I0130 14:13:37.073846 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8fd\" (UniqueName: \"kubernetes.io/projected/07ab5799-4c33-4763-9470-c64ae3d0bd28-kube-api-access-7n8fd\") pod \"tigera-operator-7bc55997bb-rgtdg\" (UID: \"07ab5799-4c33-4763-9470-c64ae3d0bd28\") " pod="tigera-operator/tigera-operator-7bc55997bb-rgtdg" Jan 30 14:13:37.079837 systemd[1]: Created slice kubepods-besteffort-pod07ab5799_4c33_4763_9470_c64ae3d0bd28.slice - libcontainer container kubepods-besteffort-pod07ab5799_4c33_4763_9470_c64ae3d0bd28.slice. Jan 30 14:13:37.202466 containerd[1481]: time="2025-01-30T14:13:37.201974761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdkjx,Uid:29cce218-3858-49d9-b043-1d6d700abb3f,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:37.228071 containerd[1481]: time="2025-01-30T14:13:37.227777418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:37.228569 containerd[1481]: time="2025-01-30T14:13:37.228515943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:37.228569 containerd[1481]: time="2025-01-30T14:13:37.228542503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:37.228744 containerd[1481]: time="2025-01-30T14:13:37.228663064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:37.255139 systemd[1]: Started cri-containerd-2be5403aa74b34df643ddb489dfe3b473a70f8c9c1a859cff456e9a1faf6d6de.scope - libcontainer container 2be5403aa74b34df643ddb489dfe3b473a70f8c9c1a859cff456e9a1faf6d6de. Jan 30 14:13:37.286573 containerd[1481]: time="2025-01-30T14:13:37.286504862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdkjx,Uid:29cce218-3858-49d9-b043-1d6d700abb3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2be5403aa74b34df643ddb489dfe3b473a70f8c9c1a859cff456e9a1faf6d6de\"" Jan 30 14:13:37.292972 containerd[1481]: time="2025-01-30T14:13:37.292800745Z" level=info msg="CreateContainer within sandbox \"2be5403aa74b34df643ddb489dfe3b473a70f8c9c1a859cff456e9a1faf6d6de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:13:37.311013 containerd[1481]: time="2025-01-30T14:13:37.310689428Z" level=info msg="CreateContainer within sandbox \"2be5403aa74b34df643ddb489dfe3b473a70f8c9c1a859cff456e9a1faf6d6de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed7fb6065a149a1414393b35cdca454693b7664567d4a72b92b0c8a8c603c6f2\"" Jan 30 14:13:37.313707 containerd[1481]: time="2025-01-30T14:13:37.311957797Z" level=info msg="StartContainer for \"ed7fb6065a149a1414393b35cdca454693b7664567d4a72b92b0c8a8c603c6f2\"" Jan 30 14:13:37.361233 systemd[1]: Started cri-containerd-ed7fb6065a149a1414393b35cdca454693b7664567d4a72b92b0c8a8c603c6f2.scope - libcontainer container ed7fb6065a149a1414393b35cdca454693b7664567d4a72b92b0c8a8c603c6f2. Jan 30 14:13:37.386554 containerd[1481]: time="2025-01-30T14:13:37.386132707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-rgtdg,Uid:07ab5799-4c33-4763-9470-c64ae3d0bd28,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:13:37.400522 containerd[1481]: time="2025-01-30T14:13:37.400276804Z" level=info msg="StartContainer for \"ed7fb6065a149a1414393b35cdca454693b7664567d4a72b92b0c8a8c603c6f2\" returns successfully" Jan 30 14:13:37.419065 containerd[1481]: time="2025-01-30T14:13:37.418225527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:37.419065 containerd[1481]: time="2025-01-30T14:13:37.419030453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:37.419065 containerd[1481]: time="2025-01-30T14:13:37.419043733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:37.419572 containerd[1481]: time="2025-01-30T14:13:37.419505896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:37.443227 systemd[1]: Started cri-containerd-3774ff2ec4bfdcad139f7ca085bbecd99313ef92111ac5be85f591f46de762a2.scope - libcontainer container 3774ff2ec4bfdcad139f7ca085bbecd99313ef92111ac5be85f591f46de762a2. 
Jan 30 14:13:37.504506 containerd[1481]: time="2025-01-30T14:13:37.504436880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-rgtdg,Uid:07ab5799-4c33-4763-9470-c64ae3d0bd28,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3774ff2ec4bfdcad139f7ca085bbecd99313ef92111ac5be85f591f46de762a2\"" Jan 30 14:13:37.510071 containerd[1481]: time="2025-01-30T14:13:37.509859517Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:13:39.256068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848057099.mount: Deactivated successfully. Jan 30 14:13:39.584270 containerd[1481]: time="2025-01-30T14:13:39.584189453Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.587650 containerd[1481]: time="2025-01-30T14:13:39.587609916Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 14:13:39.589019 containerd[1481]: time="2025-01-30T14:13:39.588980965Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.592343 containerd[1481]: time="2025-01-30T14:13:39.592303266Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.594032 containerd[1481]: time="2025-01-30T14:13:39.593999638Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.083924679s" Jan 30 14:13:39.594175 containerd[1481]: time="2025-01-30T14:13:39.594157759Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 14:13:39.598798 containerd[1481]: time="2025-01-30T14:13:39.598760989Z" level=info msg="CreateContainer within sandbox \"3774ff2ec4bfdcad139f7ca085bbecd99313ef92111ac5be85f591f46de762a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:13:39.612813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209518425.mount: Deactivated successfully. Jan 30 14:13:39.617797 containerd[1481]: time="2025-01-30T14:13:39.617533712Z" level=info msg="CreateContainer within sandbox \"3774ff2ec4bfdcad139f7ca085bbecd99313ef92111ac5be85f591f46de762a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bb5cd39d2866d127bb959ef1ca8ba2e6dd8d220ece4c8f19622d4b12e831a316\"" Jan 30 14:13:39.619588 containerd[1481]: time="2025-01-30T14:13:39.618870081Z" level=info msg="StartContainer for \"bb5cd39d2866d127bb959ef1ca8ba2e6dd8d220ece4c8f19622d4b12e831a316\"" Jan 30 14:13:39.653609 systemd[1]: Started cri-containerd-bb5cd39d2866d127bb959ef1ca8ba2e6dd8d220ece4c8f19622d4b12e831a316.scope - libcontainer container bb5cd39d2866d127bb959ef1ca8ba2e6dd8d220ece4c8f19622d4b12e831a316. 
Jan 30 14:13:39.688820 containerd[1481]: time="2025-01-30T14:13:39.687292171Z" level=info msg="StartContainer for \"bb5cd39d2866d127bb959ef1ca8ba2e6dd8d220ece4c8f19622d4b12e831a316\" returns successfully" Jan 30 14:13:40.480240 kubelet[2755]: I0130 14:13:40.479929 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wdkjx" podStartSLOduration=4.479903396 podStartE2EDuration="4.479903396s" podCreationTimestamp="2025-01-30 14:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:37.473270946 +0000 UTC m=+16.218181255" watchObservedRunningTime="2025-01-30 14:13:40.479903396 +0000 UTC m=+19.224813665" Jan 30 14:13:40.480738 kubelet[2755]: I0130 14:13:40.480359 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-rgtdg" podStartSLOduration=1.394241146 podStartE2EDuration="3.480345599s" podCreationTimestamp="2025-01-30 14:13:37 +0000 UTC" firstStartedPulling="2025-01-30 14:13:37.508936151 +0000 UTC m=+16.253846420" lastFinishedPulling="2025-01-30 14:13:39.595040564 +0000 UTC m=+18.339950873" observedRunningTime="2025-01-30 14:13:40.479667394 +0000 UTC m=+19.224577663" watchObservedRunningTime="2025-01-30 14:13:40.480345599 +0000 UTC m=+19.225255908" Jan 30 14:13:44.009970 kubelet[2755]: I0130 14:13:44.009909 2755 topology_manager.go:215] "Topology Admit Handler" podUID="848c484f-a674-460c-84c3-14c446592a71" podNamespace="calico-system" podName="calico-typha-8b577f789-d9l9j" Jan 30 14:13:44.018533 systemd[1]: Created slice kubepods-besteffort-pod848c484f_a674_460c_84c3_14c446592a71.slice - libcontainer container kubepods-besteffort-pod848c484f_a674_460c_84c3_14c446592a71.slice. Jan 30 14:13:44.115173 kubelet[2755]: I0130 14:13:44.115132 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/848c484f-a674-460c-84c3-14c446592a71-tigera-ca-bundle\") pod \"calico-typha-8b577f789-d9l9j\" (UID: \"848c484f-a674-460c-84c3-14c446592a71\") " pod="calico-system/calico-typha-8b577f789-d9l9j" Jan 30 14:13:44.115173 kubelet[2755]: I0130 14:13:44.115175 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/848c484f-a674-460c-84c3-14c446592a71-typha-certs\") pod \"calico-typha-8b577f789-d9l9j\" (UID: \"848c484f-a674-460c-84c3-14c446592a71\") " pod="calico-system/calico-typha-8b577f789-d9l9j" Jan 30 14:13:44.115173 kubelet[2755]: I0130 14:13:44.115198 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zn8q\" (UniqueName: \"kubernetes.io/projected/848c484f-a674-460c-84c3-14c446592a71-kube-api-access-4zn8q\") pod \"calico-typha-8b577f789-d9l9j\" (UID: \"848c484f-a674-460c-84c3-14c446592a71\") " pod="calico-system/calico-typha-8b577f789-d9l9j" Jan 30 14:13:44.198914 kubelet[2755]: I0130 14:13:44.197445 2755 topology_manager.go:215] "Topology Admit Handler" podUID="b3231313-cfff-408b-8f75-552167f5d55e" podNamespace="calico-system" podName="calico-node-wl6ms" Jan 30 14:13:44.209730 systemd[1]: Created slice kubepods-besteffort-podb3231313_cfff_408b_8f75_552167f5d55e.slice - libcontainer container kubepods-besteffort-podb3231313_cfff_408b_8f75_552167f5d55e.slice. 
Jan 30 14:13:44.316080 kubelet[2755]: I0130 14:13:44.315945 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-var-run-calico\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316080 kubelet[2755]: I0130 14:13:44.316006 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-var-lib-calico\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316080 kubelet[2755]: I0130 14:13:44.316031 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-cni-net-dir\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316080 kubelet[2755]: I0130 14:13:44.316054 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b3231313-cfff-408b-8f75-552167f5d55e-node-certs\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316309 kubelet[2755]: I0130 14:13:44.316088 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-flexvol-driver-host\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316309 kubelet[2755]: I0130 14:13:44.316148 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-lib-modules\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316309 kubelet[2755]: I0130 14:13:44.316168 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-xtables-lock\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316309 kubelet[2755]: I0130 14:13:44.316189 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3231313-cfff-408b-8f75-552167f5d55e-tigera-ca-bundle\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316309 kubelet[2755]: I0130 14:13:44.316208 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-cni-log-dir\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316417 kubelet[2755]: I0130 14:13:44.316224 2755 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95vzz\" (UniqueName: \"kubernetes.io/projected/b3231313-cfff-408b-8f75-552167f5d55e-kube-api-access-95vzz\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316417 kubelet[2755]: I0130 14:13:44.316244 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-policysync\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.316417 kubelet[2755]: I0130 14:13:44.316260 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b3231313-cfff-408b-8f75-552167f5d55e-cni-bin-dir\") pod \"calico-node-wl6ms\" (UID: \"b3231313-cfff-408b-8f75-552167f5d55e\") " pod="calico-system/calico-node-wl6ms" Jan 30 14:13:44.327692 containerd[1481]: time="2025-01-30T14:13:44.327394372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b577f789-d9l9j,Uid:848c484f-a674-460c-84c3-14c446592a71,Namespace:calico-system,Attempt:0,}" Jan 30 14:13:44.341454 kubelet[2755]: I0130 14:13:44.341396 2755 topology_manager.go:215] "Topology Admit Handler" podUID="2389e6aa-aa58-48fd-bacc-def6ddcc0f86" podNamespace="calico-system" podName="csi-node-driver-tbm79" Jan 30 14:13:44.342561 kubelet[2755]: E0130 14:13:44.342483 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbm79" podUID="2389e6aa-aa58-48fd-bacc-def6ddcc0f86" Jan 30 14:13:44.376983 containerd[1481]: time="2025-01-30T14:13:44.376704263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:44.376983 containerd[1481]: time="2025-01-30T14:13:44.376768183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:44.376983 containerd[1481]: time="2025-01-30T14:13:44.376783664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:44.376983 containerd[1481]: time="2025-01-30T14:13:44.376876704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:44.406249 systemd[1]: Started cri-containerd-3040dee7b7fe5f6a15022c5bc4eb4ccd43ba3a1ab37d55b676c63531f8cf7450.scope - libcontainer container 3040dee7b7fe5f6a15022c5bc4eb4ccd43ba3a1ab37d55b676c63531f8cf7450. 
Jan 30 14:13:44.423349 kubelet[2755]: E0130 14:13:44.423295 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.423349 kubelet[2755]: W0130 14:13:44.423339 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.423572 kubelet[2755]: E0130 14:13:44.423439 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.424674 kubelet[2755]: E0130 14:13:44.424639 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.424674 kubelet[2755]: W0130 14:13:44.424665 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.425983 kubelet[2755]: E0130 14:13:44.424686 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.425983 kubelet[2755]: E0130 14:13:44.424980 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.425983 kubelet[2755]: W0130 14:13:44.424991 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.425983 kubelet[2755]: E0130 14:13:44.425934 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.426212 kubelet[2755]: E0130 14:13:44.426146 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.426212 kubelet[2755]: W0130 14:13:44.426157 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.426212 kubelet[2755]: E0130 14:13:44.426173 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.426401 kubelet[2755]: E0130 14:13:44.426385 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.426401 kubelet[2755]: W0130 14:13:44.426398 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.426480 kubelet[2755]: E0130 14:13:44.426415 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:13:44.426664 kubelet[2755]: E0130 14:13:44.426640 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.426664 kubelet[2755]: W0130 14:13:44.426658 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.427017 kubelet[2755]: E0130 14:13:44.426990 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.427202 kubelet[2755]: E0130 14:13:44.427180 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.427202 kubelet[2755]: W0130 14:13:44.427198 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.427298 kubelet[2755]: E0130 14:13:44.427281 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.427703 kubelet[2755]: E0130 14:13:44.427675 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.427703 kubelet[2755]: W0130 14:13:44.427695 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.427951 kubelet[2755]: E0130 14:13:44.427899 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.428175 kubelet[2755]: E0130 14:13:44.428152 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.428175 kubelet[2755]: W0130 14:13:44.428170 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.428257 kubelet[2755]: E0130 14:13:44.428181 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.428749 kubelet[2755]: E0130 14:13:44.428727 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.428749 kubelet[2755]: W0130 14:13:44.428743 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.428852 kubelet[2755]: E0130 14:13:44.428753 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:13:44.444171 kubelet[2755]: E0130 14:13:44.444078 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.444171 kubelet[2755]: W0130 14:13:44.444140 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.444171 kubelet[2755]: E0130 14:13:44.444168 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.517360 kubelet[2755]: E0130 14:13:44.517311 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.517360 kubelet[2755]: W0130 14:13:44.517344 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.517360 kubelet[2755]: E0130 14:13:44.517365 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:44.517844 kubelet[2755]: I0130 14:13:44.517395 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2389e6aa-aa58-48fd-bacc-def6ddcc0f86-varrun\") pod \"csi-node-driver-tbm79\" (UID: \"2389e6aa-aa58-48fd-bacc-def6ddcc0f86\") " pod="calico-system/csi-node-driver-tbm79" Jan 30 14:13:44.518977 containerd[1481]: time="2025-01-30T14:13:44.518374380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wl6ms,Uid:b3231313-cfff-408b-8f75-552167f5d55e,Namespace:calico-system,Attempt:0,}" Jan 30 14:13:44.519089 kubelet[2755]: E0130 14:13:44.518498 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:44.519089 kubelet[2755]: W0130 14:13:44.518515 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:44.519089 kubelet[2755]: E0130 14:13:44.518721 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 30 14:13:44.519089 kubelet[2755]: I0130 14:13:44.518746 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlz7\" (UniqueName: \"kubernetes.io/projected/2389e6aa-aa58-48fd-bacc-def6ddcc0f86-kube-api-access-zxlz7\") pod \"csi-node-driver-tbm79\" (UID: \"2389e6aa-aa58-48fd-bacc-def6ddcc0f86\") " pod="calico-system/csi-node-driver-tbm79"
Jan 30 14:13:44.520825 kubelet[2755]: I0130 14:13:44.520730 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2389e6aa-aa58-48fd-bacc-def6ddcc0f86-kubelet-dir\") pod \"csi-node-driver-tbm79\" (UID: \"2389e6aa-aa58-48fd-bacc-def6ddcc0f86\") " pod="calico-system/csi-node-driver-tbm79"
Jan 30 14:13:44.523245 kubelet[2755]: I0130 14:13:44.522974 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2389e6aa-aa58-48fd-bacc-def6ddcc0f86-registration-dir\") pod \"csi-node-driver-tbm79\" (UID: \"2389e6aa-aa58-48fd-bacc-def6ddcc0f86\") " pod="calico-system/csi-node-driver-tbm79"
Jan 30 14:13:44.525192 kubelet[2755]: I0130 14:13:44.524653 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2389e6aa-aa58-48fd-bacc-def6ddcc0f86-socket-dir\") pod \"csi-node-driver-tbm79\" (UID: \"2389e6aa-aa58-48fd-bacc-def6ddcc0f86\") " pod="calico-system/csi-node-driver-tbm79"
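The reconciler_common.go entries between 14:13:44.517 and 14:13:44.524 attach the full volume set of the calico-system/csi-node-driver-tbm79 pod: four hostPath volumes (varrun, kubelet-dir, registration-dir, socket-dir) and the projected service-account token kube-api-access-zxlz7, as the kubernetes.io/host-path and kubernetes.io/projected UniqueName prefixes show. A hedged client-go sketch for dumping that volume list from the API server; the kubeconfig location is an assumption and panic-on-error is a simplification:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; on-cluster code would use rest.InClusterConfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("calico-system").Get(context.Background(),
		"csi-node-driver-tbm79", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Expect: varrun, kube-api-access-zxlz7, kubelet-dir,
	// registration-dir, socket-dir (per the reconciler entries above).
	for _, v := range pod.Spec.Volumes {
		switch {
		case v.HostPath != nil:
			fmt.Printf("%s -> hostPath %s\n", v.Name, v.HostPath.Path)
		case v.Projected != nil:
			fmt.Printf("%s -> projected token\n", v.Name)
		default:
			fmt.Printf("%s -> other source\n", v.Name)
		}
	}
}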
Jan 30 14:13:44.545669 containerd[1481]: time="2025-01-30T14:13:44.545355900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b577f789-d9l9j,Uid:848c484f-a674-460c-84c3-14c446592a71,Namespace:calico-system,Attempt:0,} returns sandbox id \"3040dee7b7fe5f6a15022c5bc4eb4ccd43ba3a1ab37d55b676c63531f8cf7450\""
Jan 30 14:13:44.554341 containerd[1481]: time="2025-01-30T14:13:44.554138591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 14:13:44.568523 containerd[1481]: time="2025-01-30T14:13:44.568169474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:44.568523 containerd[1481]: time="2025-01-30T14:13:44.568393476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:44.568523 containerd[1481]: time="2025-01-30T14:13:44.568413116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:44.570827 containerd[1481]: time="2025-01-30T14:13:44.570736770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:44.594278 systemd[1]: Started cri-containerd-a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4.scope - libcontainer container a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4.
Jan 30 14:13:44.625413 containerd[1481]: time="2025-01-30T14:13:44.625286212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wl6ms,Uid:b3231313-cfff-408b-8f75-552167f5d55e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4\""
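At this point both Calico pods have sandboxes: calico-typha-8b577f789-d9l9j maps to 3040dee7… and calico-node-wl6ms to a501a401…, each runc v2 shim loading its io.containerd.* plugins and each container tracked by systemd as a transient cri-containerd-<id>.scope unit. A sketch of loading one of these by ID through the containerd Go client; the socket path and the k8s.io CRI namespace are the usual defaults, assumed here:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; CRI-managed containers live in "k8s.io".
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Sandbox ID logged above for calico-node-wl6ms.
	c, err := client.LoadContainer(ctx, "a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4")
	if err != nil {
		panic(err)
	}
	labels, err := c.Labels(ctx)
	if err != nil {
		panic(err)
	}
	// The CRI plugin records pod name, namespace and UID as labels.
	fmt.Println(labels)
}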
Jan 30 14:13:45.880727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165104804.mount: Deactivated successfully.
Jan 30 14:13:46.347402 containerd[1481]: time="2025-01-30T14:13:46.346350937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:13:46.347402 containerd[1481]: time="2025-01-30T14:13:46.347351063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 30 14:13:46.347980 containerd[1481]: time="2025-01-30T14:13:46.347948946Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:13:46.350389 containerd[1481]: time="2025-01-30T14:13:46.350330840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:13:46.351344 containerd[1481]: time="2025-01-30T14:13:46.351302765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.797108813s"
Jan 30 14:13:46.351344 containerd[1481]: time="2025-01-30T14:13:46.351339046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 30 14:13:46.352679 containerd[1481]: time="2025-01-30T14:13:46.352646773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 14:13:46.370365 containerd[1481]: time="2025-01-30T14:13:46.370279193Z" level=info msg="CreateContainer within sandbox \"3040dee7b7fe5f6a15022c5bc4eb4ccd43ba3a1ab37d55b676c63531f8cf7450\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 14:13:46.376257 kubelet[2755]: E0130 14:13:46.376203 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbm79" podUID="2389e6aa-aa58-48fd-bacc-def6ddcc0f86"
Jan 30 14:13:46.390950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707425200.mount: Deactivated successfully.
Jan 30 14:13:46.392990 containerd[1481]: time="2025-01-30T14:13:46.391633754Z" level=info msg="CreateContainer within sandbox \"3040dee7b7fe5f6a15022c5bc4eb4ccd43ba3a1ab37d55b676c63531f8cf7450\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f7c0169d259d8f196f545603f9afa0ea5b87e6b23b543f59ab563d01c6f61311\""
Jan 30 14:13:46.395985 containerd[1481]: time="2025-01-30T14:13:46.395287735Z" level=info msg="StartContainer for \"f7c0169d259d8f196f545603f9afa0ea5b87e6b23b543f59ab563d01c6f61311\""
Jan 30 14:13:46.429134 systemd[1]: Started cri-containerd-f7c0169d259d8f196f545603f9afa0ea5b87e6b23b543f59ab563d01c6f61311.scope - libcontainer container f7c0169d259d8f196f545603f9afa0ea5b87e6b23b543f59ab563d01c6f61311.
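The typha pull above moved 29231308 bytes in 1.797108813s, roughly 15.5 MiB/s of effective registry throughput (a back-of-the-envelope figure that ignores unpacking). The pod_workers.go error for csi-node-driver-tbm79 is the expected bootstrap ordering: that pod needs a pod network, and the CNI plugin is only initialized once the calico-node container being started here is running. A quick check of the arithmetic:

package main

import "fmt"

func main() {
	const bytesRead = 29231308  // "bytes read" from the typha pull
	const seconds = 1.797108813 // pull duration reported by containerd
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~15.5 MiB/s
}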
Jan 30 14:13:46.470726 containerd[1481]: time="2025-01-30T14:13:46.470474161Z" level=info msg="StartContainer for \"f7c0169d259d8f196f545603f9afa0ea5b87e6b23b543f59ab563d01c6f61311\" returns successfully"
Jan 30 14:13:47.488668 kubelet[2755]: I0130 14:13:47.488605 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Error: unexpected end of JSON input" Jan 30 14:13:47.561387 kubelet[2755]: E0130 14:13:47.561178 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.561387 kubelet[2755]: W0130 14:13:47.561212 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.561387 kubelet[2755]: E0130 14:13:47.561226 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.561679 kubelet[2755]: E0130 14:13:47.561660 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.561754 kubelet[2755]: W0130 14:13:47.561741 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.561840 kubelet[2755]: E0130 14:13:47.561826 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.562320 kubelet[2755]: E0130 14:13:47.562285 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.562608 kubelet[2755]: W0130 14:13:47.562421 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.562826 kubelet[2755]: E0130 14:13:47.562689 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.563207 kubelet[2755]: E0130 14:13:47.563185 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.563501 kubelet[2755]: W0130 14:13:47.563276 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.563501 kubelet[2755]: E0130 14:13:47.563294 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.654138 kubelet[2755]: E0130 14:13:47.654097 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.655707 kubelet[2755]: W0130 14:13:47.654743 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.655707 kubelet[2755]: E0130 14:13:47.654793 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:13:47.655707 kubelet[2755]: E0130 14:13:47.655570 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.655707 kubelet[2755]: W0130 14:13:47.655592 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.655707 kubelet[2755]: E0130 14:13:47.655627 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.656820 kubelet[2755]: E0130 14:13:47.656655 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.656820 kubelet[2755]: W0130 14:13:47.656680 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.656820 kubelet[2755]: E0130 14:13:47.656710 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.657580 kubelet[2755]: E0130 14:13:47.657522 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.657580 kubelet[2755]: W0130 14:13:47.657573 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.657790 kubelet[2755]: E0130 14:13:47.657743 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.658381 kubelet[2755]: E0130 14:13:47.658193 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.658381 kubelet[2755]: W0130 14:13:47.658222 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.658381 kubelet[2755]: E0130 14:13:47.658264 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.658702 kubelet[2755]: E0130 14:13:47.658471 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.658702 kubelet[2755]: W0130 14:13:47.658482 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.658702 kubelet[2755]: E0130 14:13:47.658571 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:13:47.659247 kubelet[2755]: E0130 14:13:47.659064 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.659247 kubelet[2755]: W0130 14:13:47.659099 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.659247 kubelet[2755]: E0130 14:13:47.659129 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.659633 kubelet[2755]: E0130 14:13:47.659537 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.659633 kubelet[2755]: W0130 14:13:47.659568 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.660185 kubelet[2755]: E0130 14:13:47.659918 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.660185 kubelet[2755]: E0130 14:13:47.660008 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.660185 kubelet[2755]: W0130 14:13:47.660021 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.660185 kubelet[2755]: E0130 14:13:47.660149 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.660322 kubelet[2755]: E0130 14:13:47.660214 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.660322 kubelet[2755]: W0130 14:13:47.660225 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.660322 kubelet[2755]: E0130 14:13:47.660255 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.660442 kubelet[2755]: E0130 14:13:47.660417 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.660442 kubelet[2755]: W0130 14:13:47.660438 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.660442 kubelet[2755]: E0130 14:13:47.660459 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:13:47.661010 kubelet[2755]: E0130 14:13:47.660975 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.661010 kubelet[2755]: W0130 14:13:47.660990 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.661066 kubelet[2755]: E0130 14:13:47.661016 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.662190 kubelet[2755]: E0130 14:13:47.662037 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.662190 kubelet[2755]: W0130 14:13:47.662064 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.662190 kubelet[2755]: E0130 14:13:47.662106 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.663160 kubelet[2755]: E0130 14:13:47.662866 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.663160 kubelet[2755]: W0130 14:13:47.662986 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.663160 kubelet[2755]: E0130 14:13:47.663020 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.663673 kubelet[2755]: E0130 14:13:47.663527 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.663673 kubelet[2755]: W0130 14:13:47.663545 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.663673 kubelet[2755]: E0130 14:13:47.663632 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.664215 kubelet[2755]: E0130 14:13:47.664097 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.664215 kubelet[2755]: W0130 14:13:47.664115 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.664215 kubelet[2755]: E0130 14:13:47.664140 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:13:47.664448 kubelet[2755]: E0130 14:13:47.664423 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.664448 kubelet[2755]: W0130 14:13:47.664443 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.664516 kubelet[2755]: E0130 14:13:47.664458 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.665002 kubelet[2755]: E0130 14:13:47.664954 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:13:47.665002 kubelet[2755]: W0130 14:13:47.664970 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:13:47.665002 kubelet[2755]: E0130 14:13:47.664984 2755 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:13:47.689008 containerd[1481]: time="2025-01-30T14:13:47.688010586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:47.689719 containerd[1481]: time="2025-01-30T14:13:47.689572475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 14:13:47.690727 containerd[1481]: time="2025-01-30T14:13:47.690670041Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:47.695678 containerd[1481]: time="2025-01-30T14:13:47.695620309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:47.698589 containerd[1481]: time="2025-01-30T14:13:47.696787435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.343949261s" Jan 30 14:13:47.698589 containerd[1481]: time="2025-01-30T14:13:47.698439644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 14:13:47.704099 containerd[1481]: time="2025-01-30T14:13:47.703855994Z" level=info msg="CreateContainer within sandbox \"a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:13:47.720463 containerd[1481]: time="2025-01-30T14:13:47.720416926Z" level=info msg="CreateContainer within sandbox 
\"a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712\"" Jan 30 14:13:47.722723 containerd[1481]: time="2025-01-30T14:13:47.721439772Z" level=info msg="StartContainer for \"2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712\"" Jan 30 14:13:47.766193 systemd[1]: Started cri-containerd-2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712.scope - libcontainer container 2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712. Jan 30 14:13:47.810343 containerd[1481]: time="2025-01-30T14:13:47.810274746Z" level=info msg="StartContainer for \"2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712\" returns successfully" Jan 30 14:13:47.833130 systemd[1]: cri-containerd-2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712.scope: Deactivated successfully. Jan 30 14:13:48.065925 containerd[1481]: time="2025-01-30T14:13:48.065658957Z" level=info msg="shim disconnected" id=2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712 namespace=k8s.io Jan 30 14:13:48.065925 containerd[1481]: time="2025-01-30T14:13:48.065731798Z" level=warning msg="cleaning up after shim disconnected" id=2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712 namespace=k8s.io Jan 30 14:13:48.065925 containerd[1481]: time="2025-01-30T14:13:48.065746438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:48.360989 systemd[1]: run-containerd-runc-k8s.io-2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712-runc.dqea2P.mount: Deactivated successfully. Jan 30 14:13:48.361262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c3519a2b1e234f1fd083b7ec87df43c33605d48e2052c1830a0839006ef8712-rootfs.mount: Deactivated successfully. 
Jan 30 14:13:48.376005 kubelet[2755]: E0130 14:13:48.375941 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbm79" podUID="2389e6aa-aa58-48fd-bacc-def6ddcc0f86" Jan 30 14:13:48.504402 containerd[1481]: time="2025-01-30T14:13:48.503057219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:13:48.533276 kubelet[2755]: I0130 14:13:48.532610 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8b577f789-d9l9j" podStartSLOduration=3.7298307729999998 podStartE2EDuration="5.53258734s" podCreationTimestamp="2025-01-30 14:13:43 +0000 UTC" firstStartedPulling="2025-01-30 14:13:44.549711045 +0000 UTC m=+23.294621314" lastFinishedPulling="2025-01-30 14:13:46.352467612 +0000 UTC m=+25.097377881" observedRunningTime="2025-01-30 14:13:46.506619646 +0000 UTC m=+25.251529915" watchObservedRunningTime="2025-01-30 14:13:48.53258734 +0000 UTC m=+27.277497609" Jan 30 14:13:50.376308 kubelet[2755]: E0130 14:13:50.376254 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbm79" podUID="2389e6aa-aa58-48fd-bacc-def6ddcc0f86" Jan 30 14:13:51.053925 containerd[1481]: time="2025-01-30T14:13:51.053476374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:51.055103 containerd[1481]: time="2025-01-30T14:13:51.054797381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 14:13:51.056913 containerd[1481]: time="2025-01-30T14:13:51.055840986Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:51.060498 containerd[1481]: time="2025-01-30T14:13:51.059423405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:51.060498 containerd[1481]: time="2025-01-30T14:13:51.060337690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.55711439s" Jan 30 14:13:51.060498 containerd[1481]: time="2025-01-30T14:13:51.060375610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 14:13:51.067221 containerd[1481]: time="2025-01-30T14:13:51.067035804Z" level=info msg="CreateContainer within sandbox \"a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:13:51.094134 containerd[1481]: time="2025-01-30T14:13:51.094040623Z" level=info msg="CreateContainer within sandbox 
\"a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546\"" Jan 30 14:13:51.095809 containerd[1481]: time="2025-01-30T14:13:51.095752591Z" level=info msg="StartContainer for \"01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546\"" Jan 30 14:13:51.130287 systemd[1]: Started cri-containerd-01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546.scope - libcontainer container 01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546. Jan 30 14:13:51.164061 containerd[1481]: time="2025-01-30T14:13:51.164009622Z" level=info msg="StartContainer for \"01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546\" returns successfully" Jan 30 14:13:51.746000 containerd[1481]: time="2025-01-30T14:13:51.745934370Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:13:51.750033 systemd[1]: cri-containerd-01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546.scope: Deactivated successfully. Jan 30 14:13:51.785663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546-rootfs.mount: Deactivated successfully. Jan 30 14:13:51.793305 kubelet[2755]: I0130 14:13:51.793267 2755 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:13:51.838104 kubelet[2755]: I0130 14:13:51.836712 2755 topology_manager.go:215] "Topology Admit Handler" podUID="5a3593c1-ac5b-4526-8f21-4d2d404a8f63" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mh58t" Jan 30 14:13:51.854032 kubelet[2755]: I0130 14:13:51.845011 2755 topology_manager.go:215] "Topology Admit Handler" podUID="5d2bba59-863e-409a-b8ef-65a000a343fa" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6zngw" Jan 30 14:13:51.857038 systemd[1]: Created slice kubepods-burstable-pod5a3593c1_ac5b_4526_8f21_4d2d404a8f63.slice - libcontainer container kubepods-burstable-pod5a3593c1_ac5b_4526_8f21_4d2d404a8f63.slice. Jan 30 14:13:51.867211 kubelet[2755]: I0130 14:13:51.867155 2755 topology_manager.go:215] "Topology Admit Handler" podUID="13a8dc64-168c-4ade-949f-18933b9e3810" podNamespace="calico-system" podName="calico-kube-controllers-5b56ddf557-t64pl" Jan 30 14:13:51.880473 systemd[1]: Created slice kubepods-burstable-pod5d2bba59_863e_409a_b8ef_65a000a343fa.slice - libcontainer container kubepods-burstable-pod5d2bba59_863e_409a_b8ef_65a000a343fa.slice. Jan 30 14:13:51.888279 kubelet[2755]: I0130 14:13:51.888106 2755 topology_manager.go:215] "Topology Admit Handler" podUID="eda514dd-0d84-4ae8-92bf-c1abd6012d3c" podNamespace="calico-apiserver" podName="calico-apiserver-5ff87d744d-d22n9" Jan 30 14:13:51.889239 kubelet[2755]: I0130 14:13:51.888949 2755 topology_manager.go:215] "Topology Admit Handler" podUID="f55cf8b0-d7a7-4bae-bf93-c673ed2dc528" podNamespace="calico-apiserver" podName="calico-apiserver-5ff87d744d-8lc5m" Jan 30 14:13:51.901549 systemd[1]: Created slice kubepods-besteffort-pod13a8dc64_168c_4ade_949f_18933b9e3810.slice - libcontainer container kubepods-besteffort-pod13a8dc64_168c_4ade_949f_18933b9e3810.slice. 
Jan 30 14:13:51.917320 systemd[1]: Created slice kubepods-besteffort-podeda514dd_0d84_4ae8_92bf_c1abd6012d3c.slice - libcontainer container kubepods-besteffort-podeda514dd_0d84_4ae8_92bf_c1abd6012d3c.slice. Jan 30 14:13:51.937357 systemd[1]: Created slice kubepods-besteffort-podf55cf8b0_d7a7_4bae_bf93_c673ed2dc528.slice - libcontainer container kubepods-besteffort-podf55cf8b0_d7a7_4bae_bf93_c673ed2dc528.slice. Jan 30 14:13:51.979978 containerd[1481]: time="2025-01-30T14:13:51.979911131Z" level=info msg="shim disconnected" id=01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546 namespace=k8s.io Jan 30 14:13:51.980322 containerd[1481]: time="2025-01-30T14:13:51.980292333Z" level=warning msg="cleaning up after shim disconnected" id=01bfaa7829506973070e1e92a4a7b7dae0859a8a6eac51d858a9350bd8e36546 namespace=k8s.io Jan 30 14:13:51.980440 containerd[1481]: time="2025-01-30T14:13:51.980424294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:51.989159 kubelet[2755]: I0130 14:13:51.989034 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z4cg\" (UniqueName: \"kubernetes.io/projected/eda514dd-0d84-4ae8-92bf-c1abd6012d3c-kube-api-access-6z4cg\") pod \"calico-apiserver-5ff87d744d-d22n9\" (UID: \"eda514dd-0d84-4ae8-92bf-c1abd6012d3c\") " pod="calico-apiserver/calico-apiserver-5ff87d744d-d22n9" Jan 30 14:13:51.991010 kubelet[2755]: I0130 14:13:51.989477 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2bba59-863e-409a-b8ef-65a000a343fa-config-volume\") pod \"coredns-7db6d8ff4d-6zngw\" (UID: \"5d2bba59-863e-409a-b8ef-65a000a343fa\") " pod="kube-system/coredns-7db6d8ff4d-6zngw" Jan 30 14:13:51.991546 kubelet[2755]: I0130 14:13:51.989517 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a3593c1-ac5b-4526-8f21-4d2d404a8f63-config-volume\") pod \"coredns-7db6d8ff4d-mh58t\" (UID: \"5a3593c1-ac5b-4526-8f21-4d2d404a8f63\") " pod="kube-system/coredns-7db6d8ff4d-mh58t" Jan 30 14:13:51.991546 kubelet[2755]: I0130 14:13:51.991379 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx9pj\" (UniqueName: \"kubernetes.io/projected/5a3593c1-ac5b-4526-8f21-4d2d404a8f63-kube-api-access-rx9pj\") pod \"coredns-7db6d8ff4d-mh58t\" (UID: \"5a3593c1-ac5b-4526-8f21-4d2d404a8f63\") " pod="kube-system/coredns-7db6d8ff4d-mh58t" Jan 30 14:13:51.991546 kubelet[2755]: I0130 14:13:51.991439 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp8hd\" (UniqueName: \"kubernetes.io/projected/5d2bba59-863e-409a-b8ef-65a000a343fa-kube-api-access-bp8hd\") pod \"coredns-7db6d8ff4d-6zngw\" (UID: \"5d2bba59-863e-409a-b8ef-65a000a343fa\") " pod="kube-system/coredns-7db6d8ff4d-6zngw" Jan 30 14:13:51.991546 kubelet[2755]: I0130 14:13:51.991479 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13a8dc64-168c-4ade-949f-18933b9e3810-tigera-ca-bundle\") pod \"calico-kube-controllers-5b56ddf557-t64pl\" (UID: \"13a8dc64-168c-4ade-949f-18933b9e3810\") " pod="calico-system/calico-kube-controllers-5b56ddf557-t64pl" Jan 30 14:13:51.991546 kubelet[2755]: I0130 14:13:51.991498 2755 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmfj\" (UniqueName: \"kubernetes.io/projected/f55cf8b0-d7a7-4bae-bf93-c673ed2dc528-kube-api-access-mcmfj\") pod \"calico-apiserver-5ff87d744d-8lc5m\" (UID: \"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528\") " pod="calico-apiserver/calico-apiserver-5ff87d744d-8lc5m" Jan 30 14:13:51.992223 kubelet[2755]: I0130 14:13:51.991948 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dls6\" (UniqueName: \"kubernetes.io/projected/13a8dc64-168c-4ade-949f-18933b9e3810-kube-api-access-9dls6\") pod \"calico-kube-controllers-5b56ddf557-t64pl\" (UID: \"13a8dc64-168c-4ade-949f-18933b9e3810\") " pod="calico-system/calico-kube-controllers-5b56ddf557-t64pl" Jan 30 14:13:51.992223 kubelet[2755]: I0130 14:13:51.992006 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eda514dd-0d84-4ae8-92bf-c1abd6012d3c-calico-apiserver-certs\") pod \"calico-apiserver-5ff87d744d-d22n9\" (UID: \"eda514dd-0d84-4ae8-92bf-c1abd6012d3c\") " pod="calico-apiserver/calico-apiserver-5ff87d744d-d22n9" Jan 30 14:13:51.992223 kubelet[2755]: I0130 14:13:51.992050 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f55cf8b0-d7a7-4bae-bf93-c673ed2dc528-calico-apiserver-certs\") pod \"calico-apiserver-5ff87d744d-8lc5m\" (UID: \"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528\") " pod="calico-apiserver/calico-apiserver-5ff87d744d-8lc5m" Jan 30 14:13:52.005773 containerd[1481]: time="2025-01-30T14:13:52.005586263Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:13:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:13:52.165684 containerd[1481]: time="2025-01-30T14:13:52.165037666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mh58t,Uid:5a3593c1-ac5b-4526-8f21-4d2d404a8f63,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:52.197296 containerd[1481]: time="2025-01-30T14:13:52.196963027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6zngw,Uid:5d2bba59-863e-409a-b8ef-65a000a343fa,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:52.210130 containerd[1481]: time="2025-01-30T14:13:52.209863692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b56ddf557-t64pl,Uid:13a8dc64-168c-4ade-949f-18933b9e3810,Namespace:calico-system,Attempt:0,}" Jan 30 14:13:52.228423 containerd[1481]: time="2025-01-30T14:13:52.227981383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-d22n9,Uid:eda514dd-0d84-4ae8-92bf-c1abd6012d3c,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:13:52.250973 containerd[1481]: time="2025-01-30T14:13:52.250924539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-8lc5m,Uid:f55cf8b0-d7a7-4bae-bf93-c673ed2dc528,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:13:52.393400 systemd[1]: Created slice kubepods-besteffort-pod2389e6aa_aa58_48fd_bacc_def6ddcc0f86.slice - libcontainer container kubepods-besteffort-pod2389e6aa_aa58_48fd_bacc_def6ddcc0f86.slice. 
Jan 30 14:13:52.399922 containerd[1481]: time="2025-01-30T14:13:52.398873684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbm79,Uid:2389e6aa-aa58-48fd-bacc-def6ddcc0f86,Namespace:calico-system,Attempt:0,}" Jan 30 14:13:52.527125 containerd[1481]: time="2025-01-30T14:13:52.527054930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:13:52.555490 containerd[1481]: time="2025-01-30T14:13:52.554960871Z" level=error msg="Failed to destroy network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.557792 containerd[1481]: time="2025-01-30T14:13:52.557404723Z" level=error msg="encountered an error cleaning up failed sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.557792 containerd[1481]: time="2025-01-30T14:13:52.557492363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mh58t,Uid:5a3593c1-ac5b-4526-8f21-4d2d404a8f63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.558031 kubelet[2755]: E0130 14:13:52.557915 2755 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.558031 kubelet[2755]: E0130 14:13:52.557993 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mh58t" Jan 30 14:13:52.558031 kubelet[2755]: E0130 14:13:52.558014 2755 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mh58t" Jan 30 14:13:52.558177 kubelet[2755]: E0130 14:13:52.558062 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mh58t_kube-system(5a3593c1-ac5b-4526-8f21-4d2d404a8f63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-mh58t_kube-system(5a3593c1-ac5b-4526-8f21-4d2d404a8f63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mh58t" podUID="5a3593c1-ac5b-4526-8f21-4d2d404a8f63" Jan 30 14:13:52.568214 containerd[1481]: time="2025-01-30T14:13:52.567986176Z" level=error msg="Failed to destroy network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.569865 containerd[1481]: time="2025-01-30T14:13:52.569544704Z" level=error msg="encountered an error cleaning up failed sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.569865 containerd[1481]: time="2025-01-30T14:13:52.569618744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-d22n9,Uid:eda514dd-0d84-4ae8-92bf-c1abd6012d3c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.570168 kubelet[2755]: E0130 14:13:52.569906 2755 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.570168 kubelet[2755]: E0130 14:13:52.569970 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff87d744d-d22n9" Jan 30 14:13:52.570168 kubelet[2755]: E0130 14:13:52.569989 2755 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff87d744d-d22n9" Jan 30 14:13:52.570294 kubelet[2755]: E0130 14:13:52.570040 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5ff87d744d-d22n9_calico-apiserver(eda514dd-0d84-4ae8-92bf-c1abd6012d3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff87d744d-d22n9_calico-apiserver(eda514dd-0d84-4ae8-92bf-c1abd6012d3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff87d744d-d22n9" podUID="eda514dd-0d84-4ae8-92bf-c1abd6012d3c" Jan 30 14:13:52.580323 containerd[1481]: time="2025-01-30T14:13:52.579592755Z" level=error msg="Failed to destroy network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.582831 containerd[1481]: time="2025-01-30T14:13:52.582751131Z" level=error msg="encountered an error cleaning up failed sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.583319 containerd[1481]: time="2025-01-30T14:13:52.583143412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-8lc5m,Uid:f55cf8b0-d7a7-4bae-bf93-c673ed2dc528,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.583778 containerd[1481]: time="2025-01-30T14:13:52.583712575Z" level=error msg="Failed to destroy network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.584441 containerd[1481]: time="2025-01-30T14:13:52.584132737Z" level=error msg="encountered an error cleaning up failed sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.584441 containerd[1481]: time="2025-01-30T14:13:52.584178418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6zngw,Uid:5d2bba59-863e-409a-b8ef-65a000a343fa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.584587 kubelet[2755]: E0130 14:13:52.584349 2755 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.584587 kubelet[2755]: E0130 14:13:52.584405 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6zngw" Jan 30 14:13:52.584587 kubelet[2755]: E0130 14:13:52.584427 2755 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6zngw" Jan 30 14:13:52.584693 kubelet[2755]: E0130 14:13:52.584476 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6zngw_kube-system(5d2bba59-863e-409a-b8ef-65a000a343fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6zngw_kube-system(5d2bba59-863e-409a-b8ef-65a000a343fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6zngw" podUID="5d2bba59-863e-409a-b8ef-65a000a343fa" Jan 30 14:13:52.585039 kubelet[2755]: E0130 14:13:52.584855 2755 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.585039 kubelet[2755]: E0130 14:13:52.584927 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff87d744d-8lc5m" Jan 30 14:13:52.585039 kubelet[2755]: E0130 14:13:52.584946 2755 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5ff87d744d-8lc5m" Jan 30 14:13:52.585269 kubelet[2755]: E0130 14:13:52.584983 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ff87d744d-8lc5m_calico-apiserver(f55cf8b0-d7a7-4bae-bf93-c673ed2dc528)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff87d744d-8lc5m_calico-apiserver(f55cf8b0-d7a7-4bae-bf93-c673ed2dc528)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff87d744d-8lc5m" podUID="f55cf8b0-d7a7-4bae-bf93-c673ed2dc528" Jan 30 14:13:52.630644 containerd[1481]: time="2025-01-30T14:13:52.630394051Z" level=error msg="Failed to destroy network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.636749 containerd[1481]: time="2025-01-30T14:13:52.636337280Z" level=error msg="encountered an error cleaning up failed sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.636749 containerd[1481]: time="2025-01-30T14:13:52.636514441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b56ddf557-t64pl,Uid:13a8dc64-168c-4ade-949f-18933b9e3810,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.637182 kubelet[2755]: E0130 14:13:52.636863 2755 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.637182 kubelet[2755]: E0130 14:13:52.636963 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b56ddf557-t64pl" Jan 30 14:13:52.637182 kubelet[2755]: E0130 14:13:52.636984 2755 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b56ddf557-t64pl" Jan 30 14:13:52.637426 kubelet[2755]: E0130 14:13:52.637030 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b56ddf557-t64pl_calico-system(13a8dc64-168c-4ade-949f-18933b9e3810)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b56ddf557-t64pl_calico-system(13a8dc64-168c-4ade-949f-18933b9e3810)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b56ddf557-t64pl" podUID="13a8dc64-168c-4ade-949f-18933b9e3810" Jan 30 14:13:52.657165 containerd[1481]: time="2025-01-30T14:13:52.656463662Z" level=error msg="Failed to destroy network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.657852 containerd[1481]: time="2025-01-30T14:13:52.657428987Z" level=error msg="encountered an error cleaning up failed sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.657852 containerd[1481]: time="2025-01-30T14:13:52.657543187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbm79,Uid:2389e6aa-aa58-48fd-bacc-def6ddcc0f86,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.658744 kubelet[2755]: E0130 14:13:52.658704 2755 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:52.658821 kubelet[2755]: E0130 14:13:52.658765 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbm79" Jan 30 14:13:52.658821 kubelet[2755]: E0130 14:13:52.658784 2755 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbm79" Jan 30 14:13:52.658965 kubelet[2755]: E0130 14:13:52.658823 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbm79_calico-system(2389e6aa-aa58-48fd-bacc-def6ddcc0f86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbm79_calico-system(2389e6aa-aa58-48fd-bacc-def6ddcc0f86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbm79" podUID="2389e6aa-aa58-48fd-bacc-def6ddcc0f86" Jan 30 14:13:53.523930 kubelet[2755]: I0130 14:13:53.523615 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:13:53.525638 containerd[1481]: time="2025-01-30T14:13:53.525591351Z" level=info msg="StopPodSandbox for \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\"" Jan 30 14:13:53.526052 containerd[1481]: time="2025-01-30T14:13:53.525768872Z" level=info msg="Ensure that sandbox a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e in task-service has been cleanup successfully" Jan 30 14:13:53.528536 kubelet[2755]: I0130 14:13:53.526247 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:13:53.528664 containerd[1481]: time="2025-01-30T14:13:53.527194519Z" level=info msg="StopPodSandbox for \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\"" Jan 30 14:13:53.528664 containerd[1481]: time="2025-01-30T14:13:53.527456440Z" level=info msg="Ensure that sandbox 95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16 in task-service has been cleanup successfully" Jan 30 14:13:53.529120 kubelet[2755]: I0130 14:13:53.529047 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:13:53.530129 containerd[1481]: time="2025-01-30T14:13:53.530085013Z" level=info msg="StopPodSandbox for \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\"" Jan 30 14:13:53.530742 containerd[1481]: time="2025-01-30T14:13:53.530706976Z" level=info msg="Ensure that sandbox cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07 in task-service has been cleanup successfully" Jan 30 14:13:53.532124 kubelet[2755]: I0130 14:13:53.532052 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:13:53.534109 containerd[1481]: time="2025-01-30T14:13:53.533866552Z" level=info msg="StopPodSandbox for \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\"" Jan 30 14:13:53.534109 containerd[1481]: time="2025-01-30T14:13:53.534105633Z" level=info msg="Ensure that sandbox 
51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32 in task-service has been cleanup successfully" Jan 30 14:13:53.538096 kubelet[2755]: I0130 14:13:53.538049 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:13:53.539682 containerd[1481]: time="2025-01-30T14:13:53.539568820Z" level=info msg="StopPodSandbox for \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\"" Jan 30 14:13:53.539800 containerd[1481]: time="2025-01-30T14:13:53.539755021Z" level=info msg="Ensure that sandbox 560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59 in task-service has been cleanup successfully" Jan 30 14:13:53.544807 kubelet[2755]: I0130 14:13:53.544770 2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:13:53.545716 containerd[1481]: time="2025-01-30T14:13:53.545680690Z" level=info msg="StopPodSandbox for \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\"" Jan 30 14:13:53.550164 containerd[1481]: time="2025-01-30T14:13:53.549809711Z" level=info msg="Ensure that sandbox a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548 in task-service has been cleanup successfully" Jan 30 14:13:53.615313 containerd[1481]: time="2025-01-30T14:13:53.615143074Z" level=error msg="StopPodSandbox for \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\" failed" error="failed to destroy network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:53.615564 kubelet[2755]: E0130 14:13:53.615420 2755 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:13:53.615564 kubelet[2755]: E0130 14:13:53.615477 2755 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e"} Jan 30 14:13:53.615564 kubelet[2755]: E0130 14:13:53.615539 2755 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13a8dc64-168c-4ade-949f-18933b9e3810\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:53.615710 kubelet[2755]: E0130 14:13:53.615562 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13a8dc64-168c-4ade-949f-18933b9e3810\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b56ddf557-t64pl" podUID="13a8dc64-168c-4ade-949f-18933b9e3810" Jan 30 14:13:53.629816 containerd[1481]: time="2025-01-30T14:13:53.629465464Z" level=error msg="StopPodSandbox for \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\" failed" error="failed to destroy network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:53.630377 kubelet[2755]: E0130 14:13:53.630282 2755 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:13:53.630377 kubelet[2755]: E0130 14:13:53.630349 2755 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07"} Jan 30 14:13:53.630767 kubelet[2755]: E0130 14:13:53.630386 2755 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2389e6aa-aa58-48fd-bacc-def6ddcc0f86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:53.630767 kubelet[2755]: E0130 14:13:53.630411 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2389e6aa-aa58-48fd-bacc-def6ddcc0f86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbm79" podUID="2389e6aa-aa58-48fd-bacc-def6ddcc0f86" Jan 30 14:13:53.634947 containerd[1481]: time="2025-01-30T14:13:53.634872811Z" level=error msg="StopPodSandbox for \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\" failed" error="failed to destroy network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:53.635405 kubelet[2755]: E0130 14:13:53.635351 2755 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:13:53.635465 kubelet[2755]: E0130 14:13:53.635421 2755 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16"} Jan 30 14:13:53.635465 kubelet[2755]: E0130 14:13:53.635454 2755 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a3593c1-ac5b-4526-8f21-4d2d404a8f63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:53.635559 kubelet[2755]: E0130 14:13:53.635479 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a3593c1-ac5b-4526-8f21-4d2d404a8f63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mh58t" podUID="5a3593c1-ac5b-4526-8f21-4d2d404a8f63" Jan 30 14:13:53.651867 containerd[1481]: time="2025-01-30T14:13:53.651813215Z" level=error msg="StopPodSandbox for \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\" failed" error="failed to destroy network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:53.652266 kubelet[2755]: E0130 14:13:53.652213 2755 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:13:53.652356 kubelet[2755]: E0130 14:13:53.652273 2755 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59"} Jan 30 14:13:53.652356 kubelet[2755]: E0130 14:13:53.652308 2755 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eda514dd-0d84-4ae8-92bf-c1abd6012d3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:53.652356 kubelet[2755]: E0130 14:13:53.652330 2755 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eda514dd-0d84-4ae8-92bf-c1abd6012d3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff87d744d-d22n9" podUID="eda514dd-0d84-4ae8-92bf-c1abd6012d3c" Jan 30 14:13:53.659012 containerd[1481]: time="2025-01-30T14:13:53.658759209Z" level=error msg="StopPodSandbox for \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\" failed" error="failed to destroy network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:53.659460 kubelet[2755]: E0130 14:13:53.659421 2755 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:13:53.659522 kubelet[2755]: E0130 14:13:53.659475 2755 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32"} Jan 30 14:13:53.659557 kubelet[2755]: E0130 14:13:53.659524 2755 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:53.659614 kubelet[2755]: E0130 14:13:53.659549 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff87d744d-8lc5m" podUID="f55cf8b0-d7a7-4bae-bf93-c673ed2dc528" Jan 30 14:13:53.662030 containerd[1481]: time="2025-01-30T14:13:53.661970705Z" level=error msg="StopPodSandbox for \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\" failed" error="failed to destroy network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:13:53.662261 
kubelet[2755]: E0130 14:13:53.662220 2755 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:13:53.662322 kubelet[2755]: E0130 14:13:53.662274 2755 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548"} Jan 30 14:13:53.662322 kubelet[2755]: E0130 14:13:53.662309 2755 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d2bba59-863e-409a-b8ef-65a000a343fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:13:53.662424 kubelet[2755]: E0130 14:13:53.662329 2755 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d2bba59-863e-409a-b8ef-65a000a343fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6zngw" podUID="5d2bba59-863e-409a-b8ef-65a000a343fa" Jan 30 14:13:56.022326 kubelet[2755]: I0130 14:13:56.021169 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:13:57.455845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262700167.mount: Deactivated successfully. 
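Every failure above shares one root cause: the Calico CNI binary refuses to service ADD or DEL calls until /var/lib/calico/nodename exists, and that file is only written once the calico/node container starts on the host. A minimal sketch of that guard in Go (an illustration of the failure mode, not Calico's actual source; the file path comes from the errors above):

    // nodename_guard.go - a minimal illustration (not Calico's source) of the
    // check behind every error above: the CNI plugin cannot proceed until
    // calico/node has written the host's name to /var/lib/calico/nodename.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup

    func detectNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            // os.Stat yields exactly the "stat /var/lib/calico/nodename: no such
            // file or directory" text seen in the log when the file is absent.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := detectNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, "CNI ADD/DEL would fail here:", err)
            os.Exit(1)
        }
        fmt.Println("node:", name)
    }

Until that file appears, kubelet keeps retrying both the create path (CreatePodSandboxError) and the cleanup path (KillPodSandboxError), which is why the same stat error repeats for every pod below.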
Jan 30 14:13:57.503196 containerd[1481]: time="2025-01-30T14:13:57.502955223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:57.504335 containerd[1481]: time="2025-01-30T14:13:57.503748507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 14:13:57.505555 containerd[1481]: time="2025-01-30T14:13:57.505487835Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:57.509561 containerd[1481]: time="2025-01-30T14:13:57.508383488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:57.509561 containerd[1481]: time="2025-01-30T14:13:57.509193692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.98180844s" Jan 30 14:13:57.509561 containerd[1481]: time="2025-01-30T14:13:57.509224932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 14:13:57.527735 containerd[1481]: time="2025-01-30T14:13:57.527696057Z" level=info msg="CreateContainer within sandbox \"a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:13:57.549223 containerd[1481]: time="2025-01-30T14:13:57.549167396Z" level=info msg="CreateContainer within sandbox \"a501a4019acb5db54d6e9eab90929fec4881759ddb6844650af091b7644b99b4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f264acdda158d465640aaeb179e7a17d8b61b9d205c6a5af2819a6ad46d36aab\"" Jan 30 14:13:57.550164 containerd[1481]: time="2025-01-30T14:13:57.550130040Z" level=info msg="StartContainer for \"f264acdda158d465640aaeb179e7a17d8b61b9d205c6a5af2819a6ad46d36aab\"" Jan 30 14:13:57.589174 systemd[1]: Started cri-containerd-f264acdda158d465640aaeb179e7a17d8b61b9d205c6a5af2819a6ad46d36aab.scope - libcontainer container f264acdda158d465640aaeb179e7a17d8b61b9d205c6a5af2819a6ad46d36aab. Jan 30 14:13:57.622446 containerd[1481]: time="2025-01-30T14:13:57.622366772Z" level=info msg="StartContainer for \"f264acdda158d465640aaeb179e7a17d8b61b9d205c6a5af2819a6ad46d36aab\" returns successfully" Jan 30 14:13:57.741533 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:13:57.741689 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
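The retry loop breaks once the node image lands: 137,671,624 bytes pulled in about 4.98 s (roughly 28 MB/s), after which kubelet starts calico-node in the waiting sandbox and the kernel loads WireGuard (plausibly triggered by calico/node probing for WireGuard encryption support at startup; the log does not say so explicitly). For reference, a hedged sketch of reproducing the same pull with containerd's Go client; the socket path and the k8s.io namespace are conventional defaults, not values shown in this log:

    // pull_node.go - a sketch using containerd's Go client (API as of
    // containerd 1.x); socket path and namespace are assumed defaults.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in the "k8s.io" containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.1", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, err := img.Size(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }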
Jan 30 14:13:58.586595 kubelet[2755]: I0130 14:13:58.586365 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wl6ms" podStartSLOduration=1.70291328 podStartE2EDuration="14.586341275s" podCreationTimestamp="2025-01-30 14:13:44 +0000 UTC" firstStartedPulling="2025-01-30 14:13:44.627569185 +0000 UTC m=+23.372479414" lastFinishedPulling="2025-01-30 14:13:57.51099714 +0000 UTC m=+36.255907409" observedRunningTime="2025-01-30 14:13:58.58523423 +0000 UTC m=+37.330144499" watchObservedRunningTime="2025-01-30 14:13:58.586341275 +0000 UTC m=+37.331251544" Jan 30 14:13:59.512920 kernel: bpftool[4008]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:13:59.569468 kubelet[2755]: I0130 14:13:59.568289 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:13:59.739613 systemd-networkd[1372]: vxlan.calico: Link UP Jan 30 14:13:59.739619 systemd-networkd[1372]: vxlan.calico: Gained carrier Jan 30 14:14:00.860180 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Jan 30 14:14:01.520975 kubelet[2755]: I0130 14:14:01.520328 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:14:05.379822 containerd[1481]: time="2025-01-30T14:14:05.378995927Z" level=info msg="StopPodSandbox for \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\"" Jan 30 14:14:05.382197 containerd[1481]: time="2025-01-30T14:14:05.382152979Z" level=info msg="StopPodSandbox for \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\"" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.504 [INFO][4158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.504 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" iface="eth0" netns="/var/run/netns/cni-08d1d829-a36a-aeaf-97cc-99aea525231b" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.505 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" iface="eth0" netns="/var/run/netns/cni-08d1d829-a36a-aeaf-97cc-99aea525231b" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.505 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" iface="eth0" netns="/var/run/netns/cni-08d1d829-a36a-aeaf-97cc-99aea525231b" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.506 [INFO][4158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.506 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.562 [INFO][4170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.562 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.563 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.578 [WARNING][4170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.578 [INFO][4170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.581 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:05.590539 containerd[1481]: 2025-01-30 14:14:05.586 [INFO][4158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:05.595968 containerd[1481]: time="2025-01-30T14:14:05.595556677Z" level=info msg="TearDown network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\" successfully" Jan 30 14:14:05.595968 containerd[1481]: time="2025-01-30T14:14:05.595596397Z" level=info msg="StopPodSandbox for \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\" returns successfully" Jan 30 14:14:05.596657 systemd[1]: run-netns-cni\x2d08d1d829\x2da36a\x2daeaf\x2d97cc\x2d99aea525231b.mount: Deactivated successfully. 
Jan 30 14:14:05.601544 containerd[1481]: time="2025-01-30T14:14:05.600581537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b56ddf557-t64pl,Uid:13a8dc64-168c-4ade-949f-18933b9e3810,Namespace:calico-system,Attempt:1,}" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.501 [INFO][4157] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.502 [INFO][4157] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" iface="eth0" netns="/var/run/netns/cni-d676bf36-27da-8f91-c4d2-f4c151bde5bb" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.502 [INFO][4157] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" iface="eth0" netns="/var/run/netns/cni-d676bf36-27da-8f91-c4d2-f4c151bde5bb" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.506 [INFO][4157] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" iface="eth0" netns="/var/run/netns/cni-d676bf36-27da-8f91-c4d2-f4c151bde5bb" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.507 [INFO][4157] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.507 [INFO][4157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.562 [INFO][4169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.562 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.581 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.602 [WARNING][4169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.603 [INFO][4169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.606 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:05.610534 containerd[1481]: 2025-01-30 14:14:05.609 [INFO][4157] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:05.611418 containerd[1481]: time="2025-01-30T14:14:05.611312540Z" level=info msg="TearDown network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\" successfully" Jan 30 14:14:05.611418 containerd[1481]: time="2025-01-30T14:14:05.611344501Z" level=info msg="StopPodSandbox for \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\" returns successfully" Jan 30 14:14:05.614227 systemd[1]: run-netns-cni\x2dd676bf36\x2d27da\x2d8f91\x2dc4d2\x2df4c151bde5bb.mount: Deactivated successfully. Jan 30 14:14:05.615470 containerd[1481]: time="2025-01-30T14:14:05.615394917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbm79,Uid:2389e6aa-aa58-48fd-bacc-def6ddcc0f86,Namespace:calico-system,Attempt:1,}" Jan 30 14:14:05.839946 systemd-networkd[1372]: cali79bf7f52944: Link UP Jan 30 14:14:05.840937 systemd-networkd[1372]: cali79bf7f52944: Gained carrier Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.699 [INFO][4183] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0 calico-kube-controllers-5b56ddf557- calico-system 13a8dc64-168c-4ade-949f-18933b9e3810 780 0 2025-01-30 14:13:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b56ddf557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-d-83a473bcbf calico-kube-controllers-5b56ddf557-t64pl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali79bf7f52944 [] []}} ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.699 [INFO][4183] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.757 [INFO][4206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" HandleID="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.786 [INFO][4206] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" HandleID="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-d-83a473bcbf", "pod":"calico-kube-controllers-5b56ddf557-t64pl", "timestamp":"2025-01-30 
14:14:05.757353927 +0000 UTC"}, Hostname:"ci-4081-3-0-d-83a473bcbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.786 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.786 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.786 [INFO][4206] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-83a473bcbf' Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.789 [INFO][4206] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.795 [INFO][4206] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.802 [INFO][4206] ipam/ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.804 [INFO][4206] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.808 [INFO][4206] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.808 [INFO][4206] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.812 [INFO][4206] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538 Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.818 [INFO][4206] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.827 [INFO][4206] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.65/26] block=192.168.73.64/26 handle="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.828 [INFO][4206] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.65/26] handle="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.828 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
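The ADD path for the replacement sandbox shows Calico's block-affinity IPAM at work: under a host-wide lock, the node confirms its affinity for the 192.168.73.64/26 block and claims 192.168.73.65 from it. Each /26 block holds 64 addresses pinned to one node, which keeps pod routes aggregatable per host. A quick check of the block geometry (stdlib net/netip; the prefix is the one in the log):

    // block_geometry.go - count the addresses in the affine block from the log.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Calico carves the pool into /26 blocks and pins each one to a node.
        block := netip.MustParsePrefix("192.168.73.64/26")
        n := 0
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            n++
        }
        fmt.Printf("block %s holds %d addresses; first assignment in the log: 192.168.73.65\n", block, n)
    }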
Jan 30 14:14:05.872787 containerd[1481]: 2025-01-30 14:14:05.828 [INFO][4206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.65/26] IPv6=[] ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" HandleID="k8s-pod-network.5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.874719 containerd[1481]: 2025-01-30 14:14:05.831 [INFO][4183] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0", GenerateName:"calico-kube-controllers-5b56ddf557-", Namespace:"calico-system", SelfLink:"", UID:"13a8dc64-168c-4ade-949f-18933b9e3810", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b56ddf557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"", Pod:"calico-kube-controllers-5b56ddf557-t64pl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79bf7f52944", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:05.874719 containerd[1481]: 2025-01-30 14:14:05.832 [INFO][4183] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.65/32] ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.874719 containerd[1481]: 2025-01-30 14:14:05.832 [INFO][4183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79bf7f52944 ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.874719 containerd[1481]: 2025-01-30 14:14:05.842 [INFO][4183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 
14:14:05.874719 containerd[1481]: 2025-01-30 14:14:05.845 [INFO][4183] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0", GenerateName:"calico-kube-controllers-5b56ddf557-", Namespace:"calico-system", SelfLink:"", UID:"13a8dc64-168c-4ade-949f-18933b9e3810", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b56ddf557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538", Pod:"calico-kube-controllers-5b56ddf557-t64pl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79bf7f52944", MAC:"8e:5a:c0:03:d9:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:05.874719 containerd[1481]: 2025-01-30 14:14:05.869 [INFO][4183] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538" Namespace="calico-system" Pod="calico-kube-controllers-5b56ddf557-t64pl" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:05.922705 systemd-networkd[1372]: cali388b0e021f3: Link UP Jan 30 14:14:05.925223 systemd-networkd[1372]: cali388b0e021f3: Gained carrier Jan 30 14:14:05.941074 containerd[1481]: time="2025-01-30T14:14:05.940824825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:05.941435 containerd[1481]: time="2025-01-30T14:14:05.941223946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:05.941435 containerd[1481]: time="2025-01-30T14:14:05.941242346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:05.941435 containerd[1481]: time="2025-01-30T14:14:05.941372507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.711 [INFO][4191] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0 csi-node-driver- calico-system 2389e6aa-aa58-48fd-bacc-def6ddcc0f86 781 0 2025-01-30 14:13:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-d-83a473bcbf csi-node-driver-tbm79 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali388b0e021f3 [] []}} ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.711 [INFO][4191] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.763 [INFO][4210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" HandleID="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.789 [INFO][4210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" HandleID="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316f50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-d-83a473bcbf", "pod":"csi-node-driver-tbm79", "timestamp":"2025-01-30 14:14:05.763847394 +0000 UTC"}, Hostname:"ci-4081-3-0-d-83a473bcbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.789 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.828 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.829 [INFO][4210] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-83a473bcbf' Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.834 [INFO][4210] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.853 [INFO][4210] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.870 [INFO][4210] ipam/ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.875 [INFO][4210] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.880 [INFO][4210] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.880 [INFO][4210] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.886 [INFO][4210] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.896 [INFO][4210] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.910 [INFO][4210] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.66/26] block=192.168.73.64/26 handle="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.910 [INFO][4210] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.66/26] handle="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.910 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
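The csi-node-driver sandbox runs the identical sequence moments later and receives the next free address, 192.168.73.66; assignments are serialized by the same host-wide lock. A toy bitmap allocator (not Calico's implementation, which persists block state in the datastore) reproduces the ordering:

    // toy_allocator.go - a toy sketch of block-based assignment: one bit per
    // address in the /26, handed out under a lock, which is why the two pods
    // above receive .65 and then .66.
    package main

    import (
        "fmt"
        "math/bits"
        "net/netip"
        "sync"
    )

    type block struct {
        mu     sync.Mutex // stands in for the "host-wide IPAM lock" in the log
        prefix netip.Prefix
        used   uint64 // one bit per address in the /26
    }

    func (b *block) assign() (netip.Addr, bool) {
        b.mu.Lock()
        defer b.mu.Unlock()
        free := bits.TrailingZeros64(^b.used) // lowest clear bit
        if free >= 64 {
            return netip.Addr{}, false // block exhausted
        }
        b.used |= 1 << free
        addr := b.prefix.Addr()
        for i := 0; i < free; i++ {
            addr = addr.Next()
        }
        return addr, true
    }

    func main() {
        // Bit 0 (.64) marked used so the first assignment is .65, as in the log.
        b := &block{prefix: netip.MustParsePrefix("192.168.73.64/26"), used: 1}
        for i := 0; i < 2; i++ {
            if a, ok := b.assign(); ok {
                fmt.Println("assigned", a) // prints .65 then .66
            }
        }
    }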
Jan 30 14:14:05.963941 containerd[1481]: 2025-01-30 14:14:05.910 [INFO][4210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.66/26] IPv6=[] ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" HandleID="k8s-pod-network.ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.964715 containerd[1481]: 2025-01-30 14:14:05.916 [INFO][4191] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2389e6aa-aa58-48fd-bacc-def6ddcc0f86", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"", Pod:"csi-node-driver-tbm79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali388b0e021f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:05.964715 containerd[1481]: 2025-01-30 14:14:05.917 [INFO][4191] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.66/32] ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.964715 containerd[1481]: 2025-01-30 14:14:05.917 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali388b0e021f3 ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.964715 containerd[1481]: 2025-01-30 14:14:05.928 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.964715 containerd[1481]: 2025-01-30 14:14:05.930 [INFO][4191] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2389e6aa-aa58-48fd-bacc-def6ddcc0f86", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a", Pod:"csi-node-driver-tbm79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali388b0e021f3", MAC:"d6:a7:f6:a3:21:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:05.964715 containerd[1481]: 2025-01-30 14:14:05.961 [INFO][4191] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a" Namespace="calico-system" Pod="csi-node-driver-tbm79" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:05.975336 systemd[1]: Started cri-containerd-5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538.scope - libcontainer container 5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538. Jan 30 14:14:06.014636 containerd[1481]: time="2025-01-30T14:14:06.014256079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:06.014636 containerd[1481]: time="2025-01-30T14:14:06.014326319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:06.014636 containerd[1481]: time="2025-01-30T14:14:06.014343279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:06.014636 containerd[1481]: time="2025-01-30T14:14:06.014438960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:06.041380 systemd[1]: Started cri-containerd-ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a.scope - libcontainer container ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a. 
Jan 30 14:14:06.087238 containerd[1481]: time="2025-01-30T14:14:06.087171128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b56ddf557-t64pl,Uid:13a8dc64-168c-4ade-949f-18933b9e3810,Namespace:calico-system,Attempt:1,} returns sandbox id \"5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538\"" Jan 30 14:14:06.093687 containerd[1481]: time="2025-01-30T14:14:06.092162027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:14:06.109755 containerd[1481]: time="2025-01-30T14:14:06.109699857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbm79,Uid:2389e6aa-aa58-48fd-bacc-def6ddcc0f86,Namespace:calico-system,Attempt:1,} returns sandbox id \"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a\"" Jan 30 14:14:06.377052 containerd[1481]: time="2025-01-30T14:14:06.376708233Z" level=info msg="StopPodSandbox for \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\"" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.463 [INFO][4339] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.463 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" iface="eth0" netns="/var/run/netns/cni-fbed8980-ef18-9816-7918-eb441c625d19" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.464 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" iface="eth0" netns="/var/run/netns/cni-fbed8980-ef18-9816-7918-eb441c625d19" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.464 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" iface="eth0" netns="/var/run/netns/cni-fbed8980-ef18-9816-7918-eb441c625d19" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.464 [INFO][4339] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.464 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.502 [INFO][4345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.503 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.503 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.521 [WARNING][4345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.521 [INFO][4345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.526 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:06.530937 containerd[1481]: 2025-01-30 14:14:06.528 [INFO][4339] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:06.532176 containerd[1481]: time="2025-01-30T14:14:06.531159884Z" level=info msg="TearDown network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\" successfully" Jan 30 14:14:06.532176 containerd[1481]: time="2025-01-30T14:14:06.531196325Z" level=info msg="StopPodSandbox for \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\" returns successfully" Jan 30 14:14:06.532176 containerd[1481]: time="2025-01-30T14:14:06.531998288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-d22n9,Uid:eda514dd-0d84-4ae8-92bf-c1abd6012d3c,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:14:06.604652 systemd[1]: run-netns-cni\x2dfbed8980\x2def18\x2d9816\x2d7918\x2deb441c625d19.mount: Deactivated successfully. 
Jan 30 14:14:06.726816 systemd-networkd[1372]: caliccf66497dbe: Link UP Jan 30 14:14:06.727113 systemd-networkd[1372]: caliccf66497dbe: Gained carrier Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.611 [INFO][4352] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0 calico-apiserver-5ff87d744d- calico-apiserver eda514dd-0d84-4ae8-92bf-c1abd6012d3c 792 0 2025-01-30 14:13:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff87d744d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-d-83a473bcbf calico-apiserver-5ff87d744d-d22n9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccf66497dbe [] []}} ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.612 [INFO][4352] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.654 [INFO][4362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" HandleID="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.671 [INFO][4362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" HandleID="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa9c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-d-83a473bcbf", "pod":"calico-apiserver-5ff87d744d-d22n9", "timestamp":"2025-01-30 14:14:06.654644613 +0000 UTC"}, Hostname:"ci-4081-3-0-d-83a473bcbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.671 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.671 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.671 [INFO][4362] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-83a473bcbf' Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.674 [INFO][4362] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.681 [INFO][4362] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.689 [INFO][4362] ipam/ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.692 [INFO][4362] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.695 [INFO][4362] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.695 [INFO][4362] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.698 [INFO][4362] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703 Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.709 [INFO][4362] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.719 [INFO][4362] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.67/26] block=192.168.73.64/26 handle="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.719 [INFO][4362] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.67/26] handle="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.719 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:14:06.745325 containerd[1481]: 2025-01-30 14:14:06.719 [INFO][4362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.67/26] IPv6=[] ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" HandleID="k8s-pod-network.3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.747323 containerd[1481]: 2025-01-30 14:14:06.721 [INFO][4352] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eda514dd-0d84-4ae8-92bf-c1abd6012d3c", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"", Pod:"calico-apiserver-5ff87d744d-d22n9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccf66497dbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:06.747323 containerd[1481]: 2025-01-30 14:14:06.722 [INFO][4352] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.67/32] ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.747323 containerd[1481]: 2025-01-30 14:14:06.722 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccf66497dbe ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.747323 containerd[1481]: 2025-01-30 14:14:06.724 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.747323 containerd[1481]: 2025-01-30 14:14:06.725 [INFO][4352] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eda514dd-0d84-4ae8-92bf-c1abd6012d3c", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703", Pod:"calico-apiserver-5ff87d744d-d22n9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccf66497dbe", MAC:"ba:d3:46:b7:20:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:06.747323 containerd[1481]: 2025-01-30 14:14:06.740 [INFO][4352] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-d22n9" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:06.783359 containerd[1481]: time="2025-01-30T14:14:06.782169558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:06.783359 containerd[1481]: time="2025-01-30T14:14:06.782230878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:06.783359 containerd[1481]: time="2025-01-30T14:14:06.782246878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:06.783359 containerd[1481]: time="2025-01-30T14:14:06.782383718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:06.827132 systemd[1]: Started cri-containerd-3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703.scope - libcontainer container 3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703. 
Jan 30 14:14:06.895135 containerd[1481]: time="2025-01-30T14:14:06.894896124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-d22n9,Uid:eda514dd-0d84-4ae8-92bf-c1abd6012d3c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703\"" Jan 30 14:14:07.196139 systemd-networkd[1372]: cali388b0e021f3: Gained IPv6LL Jan 30 14:14:07.393013 systemd-networkd[1372]: cali79bf7f52944: Gained IPv6LL Jan 30 14:14:08.377398 containerd[1481]: time="2025-01-30T14:14:08.377263124Z" level=info msg="StopPodSandbox for \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\"" Jan 30 14:14:08.378343 containerd[1481]: time="2025-01-30T14:14:08.377263164Z" level=info msg="StopPodSandbox for \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\"" Jan 30 14:14:08.412150 systemd-networkd[1372]: caliccf66497dbe: Gained IPv6LL Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.475 [INFO][4448] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.475 [INFO][4448] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" iface="eth0" netns="/var/run/netns/cni-a359c10e-28a4-074e-a831-c489a43c62f5" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.475 [INFO][4448] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" iface="eth0" netns="/var/run/netns/cni-a359c10e-28a4-074e-a831-c489a43c62f5" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.475 [INFO][4448] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" iface="eth0" netns="/var/run/netns/cni-a359c10e-28a4-074e-a831-c489a43c62f5" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.475 [INFO][4448] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.475 [INFO][4448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.558 [INFO][4459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.559 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.559 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.575 [WARNING][4459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.576 [INFO][4459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.582 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:08.588556 containerd[1481]: 2025-01-30 14:14:08.585 [INFO][4448] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:08.591139 containerd[1481]: time="2025-01-30T14:14:08.589120577Z" level=info msg="TearDown network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\" successfully" Jan 30 14:14:08.591139 containerd[1481]: time="2025-01-30T14:14:08.589153657Z" level=info msg="StopPodSandbox for \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\" returns successfully" Jan 30 14:14:08.593144 systemd[1]: run-netns-cni\x2da359c10e\x2d28a4\x2d074e\x2da831\x2dc489a43c62f5.mount: Deactivated successfully. Jan 30 14:14:08.594346 containerd[1481]: time="2025-01-30T14:14:08.593204393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-8lc5m,Uid:f55cf8b0-d7a7-4bae-bf93-c673ed2dc528,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.483 [INFO][4442] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.485 [INFO][4442] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" iface="eth0" netns="/var/run/netns/cni-572c5fe6-1591-58f4-906e-52ed49c2d217" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.486 [INFO][4442] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" iface="eth0" netns="/var/run/netns/cni-572c5fe6-1591-58f4-906e-52ed49c2d217" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.486 [INFO][4442] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" iface="eth0" netns="/var/run/netns/cni-572c5fe6-1591-58f4-906e-52ed49c2d217" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.486 [INFO][4442] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.486 [INFO][4442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.559 [INFO][4467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.559 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.582 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.606 [WARNING][4467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.606 [INFO][4467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.612 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:08.623976 containerd[1481]: 2025-01-30 14:14:08.617 [INFO][4442] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:08.627083 containerd[1481]: time="2025-01-30T14:14:08.625454877Z" level=info msg="TearDown network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\" successfully" Jan 30 14:14:08.627083 containerd[1481]: time="2025-01-30T14:14:08.625490717Z" level=info msg="StopPodSandbox for \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\" returns successfully" Jan 30 14:14:08.630192 containerd[1481]: time="2025-01-30T14:14:08.628182727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mh58t,Uid:5a3593c1-ac5b-4526-8f21-4d2d404a8f63,Namespace:kube-system,Attempt:1,}" Jan 30 14:14:08.632714 systemd[1]: run-netns-cni\x2d572c5fe6\x2d1591\x2d58f4\x2d906e\x2d52ed49c2d217.mount: Deactivated successfully. 
Jan 30 14:14:09.019640 systemd-networkd[1372]: calib9097f619da: Link UP Jan 30 14:14:09.022828 systemd-networkd[1372]: calib9097f619da: Gained carrier Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.758 [INFO][4476] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0 calico-apiserver-5ff87d744d- calico-apiserver f55cf8b0-d7a7-4bae-bf93-c673ed2dc528 805 0 2025-01-30 14:13:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff87d744d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-d-83a473bcbf calico-apiserver-5ff87d744d-8lc5m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9097f619da [] []}} ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.759 [INFO][4476] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.834 [INFO][4501] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" HandleID="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.874 [INFO][4501] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" HandleID="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-d-83a473bcbf", "pod":"calico-apiserver-5ff87d744d-8lc5m", "timestamp":"2025-01-30 14:14:08.834598079 +0000 UTC"}, Hostname:"ci-4081-3-0-d-83a473bcbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.876 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.877 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.877 [INFO][4501] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-83a473bcbf' Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.886 [INFO][4501] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.906 [INFO][4501] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.933 [INFO][4501] ipam/ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.939 [INFO][4501] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.953 [INFO][4501] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.953 [INFO][4501] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.960 [INFO][4501] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11 Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.971 [INFO][4501] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.991 [INFO][4501] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.68/26] block=192.168.73.64/26 handle="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.991 [INFO][4501] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.68/26] handle="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.991 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:14:09.059770 containerd[1481]: 2025-01-30 14:14:08.991 [INFO][4501] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.68/26] IPv6=[] ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" HandleID="k8s-pod-network.ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:09.063152 containerd[1481]: 2025-01-30 14:14:08.994 [INFO][4476] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"", Pod:"calico-apiserver-5ff87d744d-8lc5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9097f619da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:09.063152 containerd[1481]: 2025-01-30 14:14:08.996 [INFO][4476] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.68/32] ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:09.063152 containerd[1481]: 2025-01-30 14:14:08.996 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9097f619da ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:09.063152 containerd[1481]: 2025-01-30 14:14:09.020 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:09.063152 containerd[1481]: 2025-01-30 14:14:09.021 [INFO][4476] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11", Pod:"calico-apiserver-5ff87d744d-8lc5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9097f619da", MAC:"92:3e:11:52:0a:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:09.063152 containerd[1481]: 2025-01-30 14:14:09.052 [INFO][4476] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11" Namespace="calico-apiserver" Pod="calico-apiserver-5ff87d744d-8lc5m" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:09.131953 systemd-networkd[1372]: cali7c41f64a421: Link UP Jan 30 14:14:09.132335 systemd-networkd[1372]: cali7c41f64a421: Gained carrier Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.781 [INFO][4487] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0 coredns-7db6d8ff4d- kube-system 5a3593c1-ac5b-4526-8f21-4d2d404a8f63 806 0 2025-01-30 14:13:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-d-83a473bcbf coredns-7db6d8ff4d-mh58t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c41f64a421 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.782 [INFO][4487] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.914 [INFO][4505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" HandleID="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.953 [INFO][4505] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" HandleID="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400047bc10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-d-83a473bcbf", "pod":"coredns-7db6d8ff4d-mh58t", "timestamp":"2025-01-30 14:14:08.910589451 +0000 UTC"}, Hostname:"ci-4081-3-0-d-83a473bcbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.953 [INFO][4505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.991 [INFO][4505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.991 [INFO][4505] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-83a473bcbf' Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:08.998 [INFO][4505] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.011 [INFO][4505] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.053 [INFO][4505] ipam/ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.066 [INFO][4505] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.077 [INFO][4505] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.077 [INFO][4505] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.082 [INFO][4505] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4 Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.094 [INFO][4505] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 
handle="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.114 [INFO][4505] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.69/26] block=192.168.73.64/26 handle="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.115 [INFO][4505] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.69/26] handle="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.115 [INFO][4505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:09.173579 containerd[1481]: 2025-01-30 14:14:09.115 [INFO][4505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.69/26] IPv6=[] ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" HandleID="k8s-pod-network.20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:09.175559 containerd[1481]: 2025-01-30 14:14:09.125 [INFO][4487] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a3593c1-ac5b-4526-8f21-4d2d404a8f63", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"", Pod:"coredns-7db6d8ff4d-mh58t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c41f64a421", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:09.175559 containerd[1481]: 2025-01-30 14:14:09.125 [INFO][4487] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.69/32] ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:09.175559 containerd[1481]: 2025-01-30 14:14:09.125 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c41f64a421 ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:09.175559 containerd[1481]: 2025-01-30 14:14:09.131 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:09.175559 containerd[1481]: 2025-01-30 14:14:09.140 [INFO][4487] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a3593c1-ac5b-4526-8f21-4d2d404a8f63", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4", Pod:"coredns-7db6d8ff4d-mh58t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c41f64a421", MAC:"5e:d4:10:ab:d8:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:09.175559 containerd[1481]: 2025-01-30 14:14:09.170 [INFO][4487] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mh58t" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:09.186253 containerd[1481]: time="2025-01-30T14:14:09.185158174Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:09.186253 containerd[1481]: time="2025-01-30T14:14:09.185218695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:09.186253 containerd[1481]: time="2025-01-30T14:14:09.185234175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:09.186253 containerd[1481]: time="2025-01-30T14:14:09.185314615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:09.215142 systemd[1]: Started cri-containerd-ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11.scope - libcontainer container ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11. Jan 30 14:14:09.265270 containerd[1481]: time="2025-01-30T14:14:09.264605035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:09.265270 containerd[1481]: time="2025-01-30T14:14:09.264671035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:09.265270 containerd[1481]: time="2025-01-30T14:14:09.264690195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:09.265848 containerd[1481]: time="2025-01-30T14:14:09.265695679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:09.299196 systemd[1]: Started cri-containerd-20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4.scope - libcontainer container 20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4. 
Jan 30 14:14:09.379389 containerd[1481]: time="2025-01-30T14:14:09.378367145Z" level=info msg="StopPodSandbox for \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\"" Jan 30 14:14:09.405929 containerd[1481]: time="2025-01-30T14:14:09.405465567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff87d744d-8lc5m,Uid:f55cf8b0-d7a7-4bae-bf93-c673ed2dc528,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11\"" Jan 30 14:14:09.439940 containerd[1481]: time="2025-01-30T14:14:09.439852617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mh58t,Uid:5a3593c1-ac5b-4526-8f21-4d2d404a8f63,Namespace:kube-system,Attempt:1,} returns sandbox id \"20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4\"" Jan 30 14:14:09.446675 containerd[1481]: time="2025-01-30T14:14:09.446305042Z" level=info msg="CreateContainer within sandbox \"20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:14:09.486763 containerd[1481]: time="2025-01-30T14:14:09.486701555Z" level=info msg="CreateContainer within sandbox \"20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"266196d87c6728f6ad051614d29e454ed2de0d809f36802b49b5eadcb4c56aaa\"" Jan 30 14:14:09.493095 containerd[1481]: time="2025-01-30T14:14:09.491571213Z" level=info msg="StartContainer for \"266196d87c6728f6ad051614d29e454ed2de0d809f36802b49b5eadcb4c56aaa\"" Jan 30 14:14:09.580604 systemd[1]: Started cri-containerd-266196d87c6728f6ad051614d29e454ed2de0d809f36802b49b5eadcb4c56aaa.scope - libcontainer container 266196d87c6728f6ad051614d29e454ed2de0d809f36802b49b5eadcb4c56aaa. Jan 30 14:14:09.693504 containerd[1481]: time="2025-01-30T14:14:09.693455096Z" level=info msg="StartContainer for \"266196d87c6728f6ad051614d29e454ed2de0d809f36802b49b5eadcb4c56aaa\" returns successfully" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.558 [INFO][4639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.559 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" iface="eth0" netns="/var/run/netns/cni-011d72b1-818c-5dfd-b55a-7db47f4e8425" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.561 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" iface="eth0" netns="/var/run/netns/cni-011d72b1-818c-5dfd-b55a-7db47f4e8425" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.562 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" iface="eth0" netns="/var/run/netns/cni-011d72b1-818c-5dfd-b55a-7db47f4e8425" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.562 [INFO][4639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.562 [INFO][4639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.672 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.673 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.673 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.695 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.695 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.703 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:09.709921 containerd[1481]: 2025-01-30 14:14:09.706 [INFO][4639] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:09.712541 containerd[1481]: time="2025-01-30T14:14:09.712178647Z" level=info msg="TearDown network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\" successfully" Jan 30 14:14:09.712541 containerd[1481]: time="2025-01-30T14:14:09.712220487Z" level=info msg="StopPodSandbox for \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\" returns successfully" Jan 30 14:14:09.714513 systemd[1]: run-netns-cni\x2d011d72b1\x2d818c\x2d5dfd\x2db55a\x2d7db47f4e8425.mount: Deactivated successfully. 
Jan 30 14:14:09.718970 containerd[1481]: time="2025-01-30T14:14:09.718353511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6zngw,Uid:5d2bba59-863e-409a-b8ef-65a000a343fa,Namespace:kube-system,Attempt:1,}" Jan 30 14:14:09.879542 containerd[1481]: time="2025-01-30T14:14:09.879156799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:09.880730 containerd[1481]: time="2025-01-30T14:14:09.880678124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 14:14:09.882946 containerd[1481]: time="2025-01-30T14:14:09.881976049Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:09.885483 containerd[1481]: time="2025-01-30T14:14:09.885442182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:09.886939 containerd[1481]: time="2025-01-30T14:14:09.886873588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.794668721s" Jan 30 14:14:09.887137 containerd[1481]: time="2025-01-30T14:14:09.887116669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 14:14:09.888799 containerd[1481]: time="2025-01-30T14:14:09.888774875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:14:09.924389 containerd[1481]: time="2025-01-30T14:14:09.924336849Z" level=info msg="CreateContainer within sandbox \"5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:14:09.946126 containerd[1481]: time="2025-01-30T14:14:09.946075892Z" level=info msg="CreateContainer within sandbox \"5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a\"" Jan 30 14:14:09.947650 containerd[1481]: time="2025-01-30T14:14:09.947144856Z" level=info msg="StartContainer for \"45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a\"" Jan 30 14:14:10.017193 systemd[1]: Started cri-containerd-45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a.scope - libcontainer container 45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a. 
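For scale, the kube-controllers pull just above reports bytes read=31953828 completed in 3.794668721s. The throughput below is derived here, not logged by containerd; the separately reported size (33323450) presumably exceeds the bytes actually read because part of the image was already present locally.

package main

import "fmt"

func main() {
	// Figures copied from the kube-controllers pull above; the rate itself
	// is computed here and does not appear in the log.
	const bytesRead = 31953828  // "bytes read" reported by containerd
	const seconds = 3.794668721 // duration from the "Pulled image ... in" entry
	fmt.Printf("effective pull rate: %.2f MiB/s\n", bytesRead/seconds/(1<<20)) // ~8.03 MiB/s
}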
Jan 30 14:14:10.042081 systemd-networkd[1372]: cali545a6c6ce1b: Link UP Jan 30 14:14:10.042302 systemd-networkd[1372]: cali545a6c6ce1b: Gained carrier Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.865 [INFO][4691] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0 coredns-7db6d8ff4d- kube-system 5d2bba59-863e-409a-b8ef-65a000a343fa 820 0 2025-01-30 14:13:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-d-83a473bcbf coredns-7db6d8ff4d-6zngw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali545a6c6ce1b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.865 [INFO][4691] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.927 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" HandleID="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.953 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" HandleID="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317a10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-d-83a473bcbf", "pod":"coredns-7db6d8ff4d-6zngw", "timestamp":"2025-01-30 14:14:09.927725542 +0000 UTC"}, Hostname:"ci-4081-3-0-d-83a473bcbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.954 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.954 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
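Every IPAM step in these entries is bracketed by the same pair, "About to acquire host-wide IPAM lock." and "Released host-wide IPAM lock.", so address assignments and releases on one node are fully serialized and two concurrent CNI invocations cannot claim the same address. A toy sketch of that pattern, illustrative only (Calico's actual lock is cross-process, not this in-process mutex):

package main

import (
	"fmt"
	"sync"
)

// hostIPAMLock stands in for the host-wide lock the plugin logs around
// every assignment and release.
var hostIPAMLock sync.Mutex

func withHostLock(pod string, assign func()) {
	fmt.Println("About to acquire host-wide IPAM lock:", pod)
	hostIPAMLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock:", pod)
	assign()
	hostIPAMLock.Unlock()
	fmt.Println("Released host-wide IPAM lock:", pod)
}

func main() {
	var wg sync.WaitGroup
	// Two CNI ADDs racing, as when several pods are sandboxed at once; the
	// lock guarantees their claims never interleave.
	for _, pod := range []string{"coredns-7db6d8ff4d-6zngw", "coredns-7db6d8ff4d-mh58t"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			withHostLock(p, func() { fmt.Println("  assigning address for", p) })
		}(pod)
	}
	wg.Wait()
}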
Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.954 [INFO][4704] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-83a473bcbf' Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.962 [INFO][4704] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.971 [INFO][4704] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.987 [INFO][4704] ipam/ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:09.994 [INFO][4704] ipam/ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.000 [INFO][4704] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.000 [INFO][4704] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.003 [INFO][4704] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368 Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.013 [INFO][4704] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.034 [INFO][4704] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.73.70/26] block=192.168.73.64/26 handle="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.034 [INFO][4704] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.70/26] handle="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" host="ci-4081-3-0-d-83a473bcbf" Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.034 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
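The run of entries above is the whole assignment path: look up the node's affinities, confirm the affinity for block 192.168.73.64/26, load the block, claim one address from it, and write the block back, which here yields 192.168.73.70/26. A condensed sketch of the claim step; the already-allocated set is inferred from addresses appearing elsewhere in this log and is purely illustrative:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node holds an affinity for this /26; assignment scans it for the
	// first free address. Naive sketch: no reservation or broadcast handling.
	block := netip.MustParsePrefix("192.168.73.64/26")

	// Inferred from addresses seen elsewhere in this log; illustrative only.
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.73.65"): true, // calico-kube-controllers-5b56ddf557-t64pl
		netip.MustParseAddr("192.168.73.66"): true, // csi-node-driver-tbm79
		netip.MustParseAddr("192.168.73.67"): true, // assumed taken
		netip.MustParseAddr("192.168.73.68"): true, // calico-apiserver-5ff87d744d-8lc5m
		netip.MustParseAddr("192.168.73.69"): true, // assumed taken
	}

	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			fmt.Println("claimed", a) // claimed 192.168.73.70, matching the log
			return
		}
	}
	fmt.Println("block exhausted; would fall back to another affine block")
}

The logged path additionally creates a handle named after the container ID (k8s-pod-network.5a52efcc...), so the claim can later be released by handle or by workload ID during teardown, as in the StopPodSandbox sequences elsewhere in this log.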
Jan 30 14:14:10.078306 containerd[1481]: 2025-01-30 14:14:10.035 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.70/26] IPv6=[] ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" HandleID="k8s-pod-network.5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:10.079405 containerd[1481]: 2025-01-30 14:14:10.038 [INFO][4691] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5d2bba59-863e-409a-b8ef-65a000a343fa", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"", Pod:"coredns-7db6d8ff4d-6zngw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali545a6c6ce1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:10.079405 containerd[1481]: 2025-01-30 14:14:10.038 [INFO][4691] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.73.70/32] ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:10.079405 containerd[1481]: 2025-01-30 14:14:10.038 [INFO][4691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali545a6c6ce1b ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:10.079405 containerd[1481]: 2025-01-30 14:14:10.040 [INFO][4691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" 
WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:10.079405 containerd[1481]: 2025-01-30 14:14:10.040 [INFO][4691] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5d2bba59-863e-409a-b8ef-65a000a343fa", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368", Pod:"coredns-7db6d8ff4d-6zngw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali545a6c6ce1b", MAC:"72:b7:fc:26:61:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:10.079405 containerd[1481]: 2025-01-30 14:14:10.075 [INFO][4691] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6zngw" WorkloadEndpoint="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:10.116730 containerd[1481]: time="2025-01-30T14:14:10.116454370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:14:10.116730 containerd[1481]: time="2025-01-30T14:14:10.116508570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:14:10.116730 containerd[1481]: time="2025-01-30T14:14:10.116518850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:10.116730 containerd[1481]: time="2025-01-30T14:14:10.116596530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:14:10.148649 containerd[1481]: time="2025-01-30T14:14:10.147245324Z" level=info msg="StartContainer for \"45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a\" returns successfully" Jan 30 14:14:10.161398 systemd[1]: Started cri-containerd-5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368.scope - libcontainer container 5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368. Jan 30 14:14:10.231929 containerd[1481]: time="2025-01-30T14:14:10.231712519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6zngw,Uid:5d2bba59-863e-409a-b8ef-65a000a343fa,Namespace:kube-system,Attempt:1,} returns sandbox id \"5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368\"" Jan 30 14:14:10.241118 containerd[1481]: time="2025-01-30T14:14:10.239918630Z" level=info msg="CreateContainer within sandbox \"5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:14:10.265294 containerd[1481]: time="2025-01-30T14:14:10.265155604Z" level=info msg="CreateContainer within sandbox \"5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f463214b4da025731661181638681fceff1cb230519be16db9409b7e0627c46\"" Jan 30 14:14:10.266799 containerd[1481]: time="2025-01-30T14:14:10.266362728Z" level=info msg="StartContainer for \"9f463214b4da025731661181638681fceff1cb230519be16db9409b7e0627c46\"" Jan 30 14:14:10.302807 systemd[1]: Started cri-containerd-9f463214b4da025731661181638681fceff1cb230519be16db9409b7e0627c46.scope - libcontainer container 9f463214b4da025731661181638681fceff1cb230519be16db9409b7e0627c46. 
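The pod_startup_latency_tracker entries just below report podStartSLOduration for both coredns pods with zero-valued pull timestamps (firstStartedPulling="0001-01-01 00:00:00"); in that case the figure is plain subtraction, observedRunningTime minus podCreationTimestamp, which this check reproduces:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-01-30T14:13:37Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-30T14:14:10.728680611Z")
	// Prints 33.728680611s, the podStartSLOduration logged below for
	// coredns-7db6d8ff4d-mh58t.
	fmt.Println(running.Sub(created))
}

The calico-kube-controllers entry, which does carry real pull timestamps, follows the same arithmetic from its 14:13:44 creation time (26.839120103s).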
Jan 30 14:14:10.364642 containerd[1481]: time="2025-01-30T14:14:10.363732491Z" level=info msg="StartContainer for \"9f463214b4da025731661181638681fceff1cb230519be16db9409b7e0627c46\" returns successfully" Jan 30 14:14:10.523263 systemd-networkd[1372]: calib9097f619da: Gained IPv6LL Jan 30 14:14:10.730479 kubelet[2755]: I0130 14:14:10.730387 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mh58t" podStartSLOduration=33.728680611 podStartE2EDuration="33.728680611s" podCreationTimestamp="2025-01-30 14:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:14:10.701960631 +0000 UTC m=+49.446870940" watchObservedRunningTime="2025-01-30 14:14:10.728680611 +0000 UTC m=+49.473590880" Jan 30 14:14:10.803368 kubelet[2755]: I0130 14:14:10.801933 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6zngw" podStartSLOduration=33.801913004 podStartE2EDuration="33.801913004s" podCreationTimestamp="2025-01-30 14:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:14:10.762828658 +0000 UTC m=+49.507738927" watchObservedRunningTime="2025-01-30 14:14:10.801913004 +0000 UTC m=+49.546823273" Jan 30 14:14:10.839506 kubelet[2755]: I0130 14:14:10.839141 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b56ddf557-t64pl" podStartSLOduration=23.041784133 podStartE2EDuration="26.839120103s" podCreationTimestamp="2025-01-30 14:13:44 +0000 UTC" firstStartedPulling="2025-01-30 14:14:06.090749662 +0000 UTC m=+44.835659931" lastFinishedPulling="2025-01-30 14:14:09.888085632 +0000 UTC m=+48.632995901" observedRunningTime="2025-01-30 14:14:10.803261929 +0000 UTC m=+49.548172198" watchObservedRunningTime="2025-01-30 14:14:10.839120103 +0000 UTC m=+49.584030412" Jan 30 14:14:10.907393 systemd-networkd[1372]: cali7c41f64a421: Gained IPv6LL Jan 30 14:14:11.392995 containerd[1481]: time="2025-01-30T14:14:11.392903585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:11.393828 containerd[1481]: time="2025-01-30T14:14:11.393704148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 14:14:11.395954 containerd[1481]: time="2025-01-30T14:14:11.395444675Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:11.401466 containerd[1481]: time="2025-01-30T14:14:11.401398337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:11.402762 containerd[1481]: time="2025-01-30T14:14:11.402712101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.513646545s" Jan 30 14:14:11.402762 containerd[1481]: 
time="2025-01-30T14:14:11.402755302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 14:14:11.405369 containerd[1481]: time="2025-01-30T14:14:11.405211071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:14:11.407145 containerd[1481]: time="2025-01-30T14:14:11.406982957Z" level=info msg="CreateContainer within sandbox \"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:14:11.430770 containerd[1481]: time="2025-01-30T14:14:11.430678764Z" level=info msg="CreateContainer within sandbox \"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"19ca80d81a5b64d4821c4cc1d9c110e2515a1bda5cd063b38f337c6ffc06f505\"" Jan 30 14:14:11.433071 containerd[1481]: time="2025-01-30T14:14:11.431646888Z" level=info msg="StartContainer for \"19ca80d81a5b64d4821c4cc1d9c110e2515a1bda5cd063b38f337c6ffc06f505\"" Jan 30 14:14:11.495198 systemd[1]: Started cri-containerd-19ca80d81a5b64d4821c4cc1d9c110e2515a1bda5cd063b38f337c6ffc06f505.scope - libcontainer container 19ca80d81a5b64d4821c4cc1d9c110e2515a1bda5cd063b38f337c6ffc06f505. Jan 30 14:14:11.539023 containerd[1481]: time="2025-01-30T14:14:11.538137479Z" level=info msg="StartContainer for \"19ca80d81a5b64d4821c4cc1d9c110e2515a1bda5cd063b38f337c6ffc06f505\" returns successfully" Jan 30 14:14:11.932319 systemd-networkd[1372]: cali545a6c6ce1b: Gained IPv6LL Jan 30 14:14:13.555421 containerd[1481]: time="2025-01-30T14:14:13.554248657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:13.555421 containerd[1481]: time="2025-01-30T14:14:13.555373621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 14:14:13.556264 containerd[1481]: time="2025-01-30T14:14:13.556226784Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:13.561437 containerd[1481]: time="2025-01-30T14:14:13.561381683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:13.562540 containerd[1481]: time="2025-01-30T14:14:13.562491927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.157232096s" Jan 30 14:14:13.562670 containerd[1481]: time="2025-01-30T14:14:13.562548607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:14:13.566653 containerd[1481]: time="2025-01-30T14:14:13.566580181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:14:13.567488 containerd[1481]: time="2025-01-30T14:14:13.567443584Z" level=info 
msg="CreateContainer within sandbox \"3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:14:13.595412 containerd[1481]: time="2025-01-30T14:14:13.595242964Z" level=info msg="CreateContainer within sandbox \"3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d9d4dbf6f9ab80aaefa24caa1433908a358d26de0bb6a2f6cebb4c4f1ce1b900\"" Jan 30 14:14:13.598173 containerd[1481]: time="2025-01-30T14:14:13.597333691Z" level=info msg="StartContainer for \"d9d4dbf6f9ab80aaefa24caa1433908a358d26de0bb6a2f6cebb4c4f1ce1b900\"" Jan 30 14:14:13.640893 systemd[1]: run-containerd-runc-k8s.io-d9d4dbf6f9ab80aaefa24caa1433908a358d26de0bb6a2f6cebb4c4f1ce1b900-runc.e4lI1q.mount: Deactivated successfully. Jan 30 14:14:13.654166 systemd[1]: Started cri-containerd-d9d4dbf6f9ab80aaefa24caa1433908a358d26de0bb6a2f6cebb4c4f1ce1b900.scope - libcontainer container d9d4dbf6f9ab80aaefa24caa1433908a358d26de0bb6a2f6cebb4c4f1ce1b900. Jan 30 14:14:13.748040 containerd[1481]: time="2025-01-30T14:14:13.747862789Z" level=info msg="StartContainer for \"d9d4dbf6f9ab80aaefa24caa1433908a358d26de0bb6a2f6cebb4c4f1ce1b900\" returns successfully" Jan 30 14:14:13.960226 containerd[1481]: time="2025-01-30T14:14:13.960170667Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:13.962038 containerd[1481]: time="2025-01-30T14:14:13.961973394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:14:13.963680 containerd[1481]: time="2025-01-30T14:14:13.963641040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 397.005618ms" Jan 30 14:14:13.963757 containerd[1481]: time="2025-01-30T14:14:13.963686440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:14:13.967155 containerd[1481]: time="2025-01-30T14:14:13.964849364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:14:13.968442 containerd[1481]: time="2025-01-30T14:14:13.968380897Z" level=info msg="CreateContainer within sandbox \"ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:14:13.991225 containerd[1481]: time="2025-01-30T14:14:13.991173378Z" level=info msg="CreateContainer within sandbox \"ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95a0e095f00b741de090ac3e134be02f399b2e3ad59b2a18254901a8dd83ca00\"" Jan 30 14:14:13.992093 containerd[1481]: time="2025-01-30T14:14:13.992043621Z" level=info msg="StartContainer for \"95a0e095f00b741de090ac3e134be02f399b2e3ad59b2a18254901a8dd83ca00\"" Jan 30 14:14:14.026135 systemd[1]: Started cri-containerd-95a0e095f00b741de090ac3e134be02f399b2e3ad59b2a18254901a8dd83ca00.scope - libcontainer container 
95a0e095f00b741de090ac3e134be02f399b2e3ad59b2a18254901a8dd83ca00. Jan 30 14:14:14.078652 containerd[1481]: time="2025-01-30T14:14:14.078586286Z" level=info msg="StartContainer for \"95a0e095f00b741de090ac3e134be02f399b2e3ad59b2a18254901a8dd83ca00\" returns successfully" Jan 30 14:14:14.586277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210147493.mount: Deactivated successfully. Jan 30 14:14:14.721942 kubelet[2755]: I0130 14:14:14.720050 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff87d744d-8lc5m" podStartSLOduration=27.169416263 podStartE2EDuration="31.720028387s" podCreationTimestamp="2025-01-30 14:13:43 +0000 UTC" firstStartedPulling="2025-01-30 14:14:09.41414068 +0000 UTC m=+48.159050949" lastFinishedPulling="2025-01-30 14:14:13.964752804 +0000 UTC m=+52.709663073" observedRunningTime="2025-01-30 14:14:14.701279721 +0000 UTC m=+53.446189990" watchObservedRunningTime="2025-01-30 14:14:14.720028387 +0000 UTC m=+53.464938656" Jan 30 14:14:15.443799 containerd[1481]: time="2025-01-30T14:14:15.443749396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:15.445659 containerd[1481]: time="2025-01-30T14:14:15.445422922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 14:14:15.446564 containerd[1481]: time="2025-01-30T14:14:15.446524206Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:15.449008 containerd[1481]: time="2025-01-30T14:14:15.448954574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:14:15.451182 containerd[1481]: time="2025-01-30T14:14:15.450425619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.485539335s" Jan 30 14:14:15.451182 containerd[1481]: time="2025-01-30T14:14:15.450476860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 14:14:15.455039 containerd[1481]: time="2025-01-30T14:14:15.454993915Z" level=info msg="CreateContainer within sandbox \"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:14:15.493992 containerd[1481]: time="2025-01-30T14:14:15.493579209Z" level=info msg="CreateContainer within sandbox \"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4268bfc5ee9ec830b24c6bbd349dd7765639a41bf7948f68598db4db509f3dd1\"" Jan 30 14:14:15.496957 containerd[1481]: time="2025-01-30T14:14:15.496242939Z" level=info msg="StartContainer for 
\"4268bfc5ee9ec830b24c6bbd349dd7765639a41bf7948f68598db4db509f3dd1\"" Jan 30 14:14:15.553123 systemd[1]: Started cri-containerd-4268bfc5ee9ec830b24c6bbd349dd7765639a41bf7948f68598db4db509f3dd1.scope - libcontainer container 4268bfc5ee9ec830b24c6bbd349dd7765639a41bf7948f68598db4db509f3dd1. Jan 30 14:14:15.612649 containerd[1481]: time="2025-01-30T14:14:15.612151302Z" level=info msg="StartContainer for \"4268bfc5ee9ec830b24c6bbd349dd7765639a41bf7948f68598db4db509f3dd1\" returns successfully" Jan 30 14:14:15.690921 kubelet[2755]: I0130 14:14:15.690841 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:14:15.711933 kubelet[2755]: I0130 14:14:15.711632 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tbm79" podStartSLOduration=22.371413367 podStartE2EDuration="31.711609087s" podCreationTimestamp="2025-01-30 14:13:44 +0000 UTC" firstStartedPulling="2025-01-30 14:14:06.111875505 +0000 UTC m=+44.856785734" lastFinishedPulling="2025-01-30 14:14:15.452071105 +0000 UTC m=+54.196981454" observedRunningTime="2025-01-30 14:14:15.709167559 +0000 UTC m=+54.454077828" watchObservedRunningTime="2025-01-30 14:14:15.711609087 +0000 UTC m=+54.456519436" Jan 30 14:14:15.714413 kubelet[2755]: I0130 14:14:15.714136 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff87d744d-d22n9" podStartSLOduration=26.049642226 podStartE2EDuration="32.714111536s" podCreationTimestamp="2025-01-30 14:13:43 +0000 UTC" firstStartedPulling="2025-01-30 14:14:06.899616502 +0000 UTC m=+45.644526771" lastFinishedPulling="2025-01-30 14:14:13.564085852 +0000 UTC m=+52.308996081" observedRunningTime="2025-01-30 14:14:14.723019677 +0000 UTC m=+53.467929946" watchObservedRunningTime="2025-01-30 14:14:15.714111536 +0000 UTC m=+54.459021805" Jan 30 14:14:16.490540 kubelet[2755]: I0130 14:14:16.490456 2755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:14:16.496264 kubelet[2755]: I0130 14:14:16.495887 2755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:14:21.382795 containerd[1481]: time="2025-01-30T14:14:21.382646856Z" level=info msg="StopPodSandbox for \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\"" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.440 [WARNING][5067] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0", GenerateName:"calico-kube-controllers-5b56ddf557-", Namespace:"calico-system", SelfLink:"", UID:"13a8dc64-168c-4ade-949f-18933b9e3810", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b56ddf557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538", Pod:"calico-kube-controllers-5b56ddf557-t64pl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79bf7f52944", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.440 [INFO][5067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.440 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" iface="eth0" netns="" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.440 [INFO][5067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.440 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.482 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.482 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.482 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.493 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.493 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.496 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:21.500744 containerd[1481]: 2025-01-30 14:14:21.498 [INFO][5067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.500744 containerd[1481]: time="2025-01-30T14:14:21.500736557Z" level=info msg="TearDown network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\" successfully" Jan 30 14:14:21.502973 containerd[1481]: time="2025-01-30T14:14:21.500768997Z" level=info msg="StopPodSandbox for \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\" returns successfully" Jan 30 14:14:21.502973 containerd[1481]: time="2025-01-30T14:14:21.501562680Z" level=info msg="RemovePodSandbox for \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\"" Jan 30 14:14:21.509592 containerd[1481]: time="2025-01-30T14:14:21.509402105Z" level=info msg="Forcibly stopping sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\"" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.558 [WARNING][5094] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0", GenerateName:"calico-kube-controllers-5b56ddf557-", Namespace:"calico-system", SelfLink:"", UID:"13a8dc64-168c-4ade-949f-18933b9e3810", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b56ddf557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"5bfa2e825a460e1b8f221ccef7cff9f4bd0973b32f14fcc26e1c15d236406538", Pod:"calico-kube-controllers-5b56ddf557-t64pl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79bf7f52944", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.558 [INFO][5094] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.558 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" iface="eth0" netns="" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.558 [INFO][5094] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.558 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.580 [INFO][5100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.580 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.580 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.591 [WARNING][5100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.591 [INFO][5100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" HandleID="k8s-pod-network.a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--kube--controllers--5b56ddf557--t64pl-eth0" Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.594 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:21.605529 containerd[1481]: 2025-01-30 14:14:21.599 [INFO][5094] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e" Jan 30 14:14:21.605529 containerd[1481]: time="2025-01-30T14:14:21.604224091Z" level=info msg="TearDown network for sandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\" successfully" Jan 30 14:14:21.609290 containerd[1481]: time="2025-01-30T14:14:21.609250867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:14:21.609663 containerd[1481]: time="2025-01-30T14:14:21.609640228Z" level=info msg="RemovePodSandbox \"a26efdf14ef41a837157c8ee76469149905432d4ce33014ada107f6783ce531e\" returns successfully" Jan 30 14:14:21.610542 containerd[1481]: time="2025-01-30T14:14:21.610515071Z" level=info msg="StopPodSandbox for \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\"" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.667 [WARNING][5118] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2389e6aa-aa58-48fd-bacc-def6ddcc0f86", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a", Pod:"csi-node-driver-tbm79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali388b0e021f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.667 [INFO][5118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.668 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" iface="eth0" netns="" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.668 [INFO][5118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.668 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.699 [INFO][5124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.699 [INFO][5124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.699 [INFO][5124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.711 [WARNING][5124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.711 [INFO][5124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.714 [INFO][5124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:21.717978 containerd[1481]: 2025-01-30 14:14:21.715 [INFO][5118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.717978 containerd[1481]: time="2025-01-30T14:14:21.717910977Z" level=info msg="TearDown network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\" successfully" Jan 30 14:14:21.717978 containerd[1481]: time="2025-01-30T14:14:21.717938257Z" level=info msg="StopPodSandbox for \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\" returns successfully" Jan 30 14:14:21.720456 containerd[1481]: time="2025-01-30T14:14:21.719387622Z" level=info msg="RemovePodSandbox for \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\"" Jan 30 14:14:21.720456 containerd[1481]: time="2025-01-30T14:14:21.719427502Z" level=info msg="Forcibly stopping sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\"" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.768 [WARNING][5142] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2389e6aa-aa58-48fd-bacc-def6ddcc0f86", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"ebd3e5300143a3ba96fa1f3306087e62e6d34223506074c8a753b60b7b9c261a", Pod:"csi-node-driver-tbm79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali388b0e021f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.768 [INFO][5142] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.768 [INFO][5142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" iface="eth0" netns="" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.768 [INFO][5142] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.768 [INFO][5142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.792 [INFO][5148] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.792 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.792 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.804 [WARNING][5148] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.804 [INFO][5148] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" HandleID="k8s-pod-network.cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Workload="ci--4081--3--0--d--83a473bcbf-k8s-csi--node--driver--tbm79-eth0" Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.806 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:21.810127 containerd[1481]: 2025-01-30 14:14:21.808 [INFO][5142] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07" Jan 30 14:14:21.810575 containerd[1481]: time="2025-01-30T14:14:21.810177875Z" level=info msg="TearDown network for sandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\" successfully" Jan 30 14:14:21.814626 containerd[1481]: time="2025-01-30T14:14:21.814535329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:14:21.814757 containerd[1481]: time="2025-01-30T14:14:21.814705410Z" level=info msg="RemovePodSandbox \"cc565e3e1efcded1b5ddcc0ede6271f6823f2e4648beed5084f1445624c58b07\" returns successfully" Jan 30 14:14:21.815271 containerd[1481]: time="2025-01-30T14:14:21.815246651Z" level=info msg="StopPodSandbox for \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\"" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.859 [WARNING][5166] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11", Pod:"calico-apiserver-5ff87d744d-8lc5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9097f619da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.860 [INFO][5166] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.860 [INFO][5166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" iface="eth0" netns="" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.860 [INFO][5166] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.860 [INFO][5166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.885 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.885 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.885 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.898 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.898 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.901 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:21.905434 containerd[1481]: 2025-01-30 14:14:21.903 [INFO][5166] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:21.906024 containerd[1481]: time="2025-01-30T14:14:21.905482462Z" level=info msg="TearDown network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\" successfully" Jan 30 14:14:21.906024 containerd[1481]: time="2025-01-30T14:14:21.905509822Z" level=info msg="StopPodSandbox for \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\" returns successfully" Jan 30 14:14:21.906782 containerd[1481]: time="2025-01-30T14:14:21.906622626Z" level=info msg="RemovePodSandbox for \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\"" Jan 30 14:14:21.906782 containerd[1481]: time="2025-01-30T14:14:21.906734106Z" level=info msg="Forcibly stopping sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\"" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.959 [WARNING][5190] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f55cf8b0-d7a7-4bae-bf93-c673ed2dc528", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"ee5bd63cc372a5cafce07342ac87a346ee762d7e93f0518617a070b769a99c11", Pod:"calico-apiserver-5ff87d744d-8lc5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9097f619da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.959 [INFO][5190] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.959 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" iface="eth0" netns="" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.959 [INFO][5190] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.959 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.987 [INFO][5196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.987 [INFO][5196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.987 [INFO][5196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.997 [WARNING][5196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:21.998 [INFO][5196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" HandleID="k8s-pod-network.51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--8lc5m-eth0" Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:22.000 [INFO][5196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:22.004970 containerd[1481]: 2025-01-30 14:14:22.002 [INFO][5190] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32" Jan 30 14:14:22.005555 containerd[1481]: time="2025-01-30T14:14:22.004932823Z" level=info msg="TearDown network for sandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\" successfully" Jan 30 14:14:22.009284 containerd[1481]: time="2025-01-30T14:14:22.009219797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:14:22.009602 containerd[1481]: time="2025-01-30T14:14:22.009308077Z" level=info msg="RemovePodSandbox \"51faa1ee17f6cb9ebba92564826618f64bf4ea4483fcc067d51ab09f10b80f32\" returns successfully" Jan 30 14:14:22.009787 containerd[1481]: time="2025-01-30T14:14:22.009743878Z" level=info msg="StopPodSandbox for \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\"" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.059 [WARNING][5214] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eda514dd-0d84-4ae8-92bf-c1abd6012d3c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703", Pod:"calico-apiserver-5ff87d744d-d22n9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccf66497dbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.060 [INFO][5214] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.060 [INFO][5214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" iface="eth0" netns="" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.060 [INFO][5214] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.060 [INFO][5214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.087 [INFO][5220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.087 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.087 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.100 [WARNING][5220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.100 [INFO][5220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.103 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:22.108660 containerd[1481]: 2025-01-30 14:14:22.106 [INFO][5214] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.110148 containerd[1481]: time="2025-01-30T14:14:22.108701314Z" level=info msg="TearDown network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\" successfully" Jan 30 14:14:22.110148 containerd[1481]: time="2025-01-30T14:14:22.108738754Z" level=info msg="StopPodSandbox for \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\" returns successfully" Jan 30 14:14:22.110148 containerd[1481]: time="2025-01-30T14:14:22.110014718Z" level=info msg="RemovePodSandbox for \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\"" Jan 30 14:14:22.110148 containerd[1481]: time="2025-01-30T14:14:22.110053318Z" level=info msg="Forcibly stopping sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\"" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.157 [WARNING][5238] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0", GenerateName:"calico-apiserver-5ff87d744d-", Namespace:"calico-apiserver", SelfLink:"", UID:"eda514dd-0d84-4ae8-92bf-c1abd6012d3c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff87d744d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"3b43742c4bdcab82bb79786350f5fa15b0956c3baa50a6bc0fd7eddb7eb00703", Pod:"calico-apiserver-5ff87d744d-d22n9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccf66497dbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.158 [INFO][5238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.158 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" iface="eth0" netns="" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.158 [INFO][5238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.158 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.185 [INFO][5244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.186 [INFO][5244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.186 [INFO][5244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.201 [WARNING][5244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.202 [INFO][5244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" HandleID="k8s-pod-network.560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Workload="ci--4081--3--0--d--83a473bcbf-k8s-calico--apiserver--5ff87d744d--d22n9-eth0" Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.204 [INFO][5244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:22.209554 containerd[1481]: 2025-01-30 14:14:22.206 [INFO][5238] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59" Jan 30 14:14:22.209554 containerd[1481]: time="2025-01-30T14:14:22.208859953Z" level=info msg="TearDown network for sandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\" successfully" Jan 30 14:14:22.238606 containerd[1481]: time="2025-01-30T14:14:22.238074366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:14:22.238606 containerd[1481]: time="2025-01-30T14:14:22.238214247Z" level=info msg="RemovePodSandbox \"560a3c80d7a5e53962729fed452dcd06c68cd759fb48b9c56b8c5f40feb86d59\" returns successfully" Jan 30 14:14:22.240636 containerd[1481]: time="2025-01-30T14:14:22.240416694Z" level=info msg="StopPodSandbox for \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\"" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.293 [WARNING][5278] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5d2bba59-863e-409a-b8ef-65a000a343fa", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368", Pod:"coredns-7db6d8ff4d-6zngw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali545a6c6ce1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.293 [INFO][5278] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.293 [INFO][5278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" iface="eth0" netns="" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.293 [INFO][5278] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.293 [INFO][5278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.336 [INFO][5288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.336 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.336 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.347 [WARNING][5288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.347 [INFO][5288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.350 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:22.357132 containerd[1481]: 2025-01-30 14:14:22.353 [INFO][5278] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.357132 containerd[1481]: time="2025-01-30T14:14:22.356631464Z" level=info msg="TearDown network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\" successfully" Jan 30 14:14:22.357132 containerd[1481]: time="2025-01-30T14:14:22.356658384Z" level=info msg="StopPodSandbox for \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\" returns successfully" Jan 30 14:14:22.358983 containerd[1481]: time="2025-01-30T14:14:22.358834271Z" level=info msg="RemovePodSandbox for \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\"" Jan 30 14:14:22.358983 containerd[1481]: time="2025-01-30T14:14:22.358902871Z" level=info msg="Forcibly stopping sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\"" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.411 [WARNING][5306] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5d2bba59-863e-409a-b8ef-65a000a343fa", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"5a52efcc1992a71f3970ff3cfdb9ae7d5d1959bf38171fa4071492921c1b3368", Pod:"coredns-7db6d8ff4d-6zngw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali545a6c6ce1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.412 [INFO][5306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.412 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" iface="eth0" netns="" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.412 [INFO][5306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.412 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.439 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.439 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.439 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.450 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.450 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" HandleID="k8s-pod-network.a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--6zngw-eth0" Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.453 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:22.456628 containerd[1481]: 2025-01-30 14:14:22.454 [INFO][5306] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548" Jan 30 14:14:22.457440 containerd[1481]: time="2025-01-30T14:14:22.456681623Z" level=info msg="TearDown network for sandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\" successfully" Jan 30 14:14:22.461032 containerd[1481]: time="2025-01-30T14:14:22.460927836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:14:22.461505 containerd[1481]: time="2025-01-30T14:14:22.461080597Z" level=info msg="RemovePodSandbox \"a1b06f7a61b3833c53b94c7f5a69324cd8d7c397912c2422c3e5dbb467b7b548\" returns successfully" Jan 30 14:14:22.462016 containerd[1481]: time="2025-01-30T14:14:22.461731079Z" level=info msg="StopPodSandbox for \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\"" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.518 [WARNING][5330] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a3593c1-ac5b-4526-8f21-4d2d404a8f63", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4", Pod:"coredns-7db6d8ff4d-mh58t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c41f64a421", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.518 [INFO][5330] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.518 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" iface="eth0" netns="" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.518 [INFO][5330] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.518 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.548 [INFO][5336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.549 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.549 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.565 [WARNING][5336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.565 [INFO][5336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.567 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:22.572861 containerd[1481]: 2025-01-30 14:14:22.570 [INFO][5330] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.573779 containerd[1481]: time="2025-01-30T14:14:22.573606196Z" level=info msg="TearDown network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\" successfully" Jan 30 14:14:22.573779 containerd[1481]: time="2025-01-30T14:14:22.573640956Z" level=info msg="StopPodSandbox for \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\" returns successfully" Jan 30 14:14:22.574779 containerd[1481]: time="2025-01-30T14:14:22.574721439Z" level=info msg="RemovePodSandbox for \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\"" Jan 30 14:14:22.574866 containerd[1481]: time="2025-01-30T14:14:22.574795519Z" level=info msg="Forcibly stopping sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\"" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.624 [WARNING][5354] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a3593c1-ac5b-4526-8f21-4d2d404a8f63", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-83a473bcbf", ContainerID:"20b44d58aeaf653508ff5dfdbdfe2b0107b99fe2988b65ddd662513bbf1a60b4", Pod:"coredns-7db6d8ff4d-mh58t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c41f64a421", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.624 [INFO][5354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.625 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" iface="eth0" netns="" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.625 [INFO][5354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.625 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.652 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.652 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.652 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.667 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.667 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" HandleID="k8s-pod-network.95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Workload="ci--4081--3--0--d--83a473bcbf-k8s-coredns--7db6d8ff4d--mh58t-eth0" Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.671 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:14:22.675906 containerd[1481]: 2025-01-30 14:14:22.673 [INFO][5354] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16" Jan 30 14:14:22.678727 containerd[1481]: time="2025-01-30T14:14:22.677995248Z" level=info msg="TearDown network for sandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\" successfully" Jan 30 14:14:22.682939 containerd[1481]: time="2025-01-30T14:14:22.682731584Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:14:22.682939 containerd[1481]: time="2025-01-30T14:14:22.682829864Z" level=info msg="RemovePodSandbox \"95abd3d809298de544c6f6e81f3109175739c8250fd40250623fa36367ed9c16\" returns successfully" Jan 30 14:14:26.328923 kubelet[2755]: I0130 14:14:26.328689 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:15:23.177868 systemd[1]: run-containerd-runc-k8s.io-45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a-runc.5ZPMqs.mount: Deactivated successfully. Jan 30 14:16:52.231813 systemd[1]: run-containerd-runc-k8s.io-45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a-runc.ui5qit.mount: Deactivated successfully. Jan 30 14:18:13.277717 systemd[1]: Started sshd@7-168.119.241.96:22-139.178.68.195:42872.service - OpenSSH per-connection server daemon (139.178.68.195:42872). Jan 30 14:18:14.279222 sshd[5850]: Accepted publickey for core from 139.178.68.195 port 42872 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:14.282959 sshd[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:14.289862 systemd-logind[1454]: New session 8 of user core. Jan 30 14:18:14.295282 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:18:15.055055 sshd[5850]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:15.061670 systemd[1]: sshd@7-168.119.241.96:22-139.178.68.195:42872.service: Deactivated successfully. Jan 30 14:18:15.066834 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:18:15.071626 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:18:15.073007 systemd-logind[1454]: Removed session 8. 
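The containerd/Calico entries above all trace the same CNI teardown path: containerd stops a pod sandbox, the Calico CNI plugin cleans up the network namespace and asks the IPAM plugin to release the pod's address, and the IPAM plugin does so under a host-wide lock ("About to acquire" / "Acquired" / "Released host-wide IPAM lock"). Because these sandboxes are already gone, every release takes the "Asked to release address but it doesn't exist. Ignoring" branch, and containerd likewise logs "not found" only as a warning before "RemovePodSandbox ... returns successfully": teardown is deliberately idempotent, so repeating it on a half-cleaned pod cannot fail. The Go sketch below illustrates that release-under-lock, ignore-if-missing pattern; it is a minimal illustration only, not Calico's implementation, and the IPAM type, its Release method, and the lock granularity are invented for the sketch.

// Minimal sketch of an idempotent, host-wide-locked IPAM release,
// modeled on the log sequence above (acquire lock -> release by
// handle -> ignore missing allocations -> release lock).
// Illustrative only: NOT Calico's code; the IPAM type and the
// Release signature are assumptions made for this sketch.
package main

import (
	"fmt"
	"sync"
)

type IPAM struct {
	mu     sync.Mutex        // stands in for the "host-wide IPAM lock"
	byHand map[string]string // handleID -> allocated IP
}

// Release frees the address recorded for handleID. A missing
// allocation is not an error: teardown must be safe to repeat,
// which is why the log shows "doesn't exist. Ignoring" warnings
// rather than failures.
func (i *IPAM) Release(handleID string) {
	i.mu.Lock() // "Acquired host-wide IPAM lock."
	defer i.mu.Unlock()

	if ip, ok := i.byHand[handleID]; ok {
		delete(i.byHand, handleID)
		fmt.Printf("released %s for %s\n", ip, handleID)
		return
	}
	// "Asked to release address but it doesn't exist. Ignoring ..."
	fmt.Printf("no allocation for %s, ignoring\n", handleID)
}

func main() {
	ipam := &IPAM{byHand: map[string]string{
		"k8s-pod-network.example": "192.168.73.70/32",
	}}
	ipam.Release("k8s-pod-network.example") // releases the address
	ipam.Release("k8s-pod-network.example") // repeat call: ignored, still succeeds
}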
Jan 30 14:18:20.231521 systemd[1]: Started sshd@8-168.119.241.96:22-139.178.68.195:57238.service - OpenSSH per-connection server daemon (139.178.68.195:57238). Jan 30 14:18:21.200506 sshd[5863]: Accepted publickey for core from 139.178.68.195 port 57238 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:21.203094 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:21.208988 systemd-logind[1454]: New session 9 of user core. Jan 30 14:18:21.216152 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:18:21.957724 sshd[5863]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:21.963174 systemd[1]: sshd@8-168.119.241.96:22-139.178.68.195:57238.service: Deactivated successfully. Jan 30 14:18:21.966776 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:18:21.968708 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:18:21.969945 systemd-logind[1454]: Removed session 9. Jan 30 14:18:22.138513 systemd[1]: Started sshd@9-168.119.241.96:22-139.178.68.195:57252.service - OpenSSH per-connection server daemon (139.178.68.195:57252). Jan 30 14:18:23.119007 sshd[5879]: Accepted publickey for core from 139.178.68.195 port 57252 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:23.121198 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:23.127529 systemd-logind[1454]: New session 10 of user core. Jan 30 14:18:23.130144 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:18:23.175327 systemd[1]: run-containerd-runc-k8s.io-45d5b93800323d13390f7ced5d7fd56cfb300cc940c0594e875fb25309439e5a-runc.IITvhW.mount: Deactivated successfully. Jan 30 14:18:23.908171 sshd[5879]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:23.913633 systemd[1]: sshd@9-168.119.241.96:22-139.178.68.195:57252.service: Deactivated successfully. Jan 30 14:18:23.918812 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:18:23.921164 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:18:23.923002 systemd-logind[1454]: Removed session 10. Jan 30 14:18:24.086247 systemd[1]: Started sshd@10-168.119.241.96:22-139.178.68.195:57260.service - OpenSSH per-connection server daemon (139.178.68.195:57260). Jan 30 14:18:25.066687 sshd[5929]: Accepted publickey for core from 139.178.68.195 port 57260 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:25.067528 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:25.072972 systemd-logind[1454]: New session 11 of user core. Jan 30 14:18:25.078168 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:18:25.830735 sshd[5929]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:25.834785 systemd[1]: sshd@10-168.119.241.96:22-139.178.68.195:57260.service: Deactivated successfully. Jan 30 14:18:25.837108 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:18:25.840711 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:18:25.841970 systemd-logind[1454]: Removed session 11. Jan 30 14:18:31.009383 systemd[1]: Started sshd@11-168.119.241.96:22-139.178.68.195:42082.service - OpenSSH per-connection server daemon (139.178.68.195:42082). 
Jan 30 14:18:31.989574 sshd[5947]: Accepted publickey for core from 139.178.68.195 port 42082 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:31.991752 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:31.998840 systemd-logind[1454]: New session 12 of user core. Jan 30 14:18:32.003193 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:18:32.750432 sshd[5947]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:32.756468 systemd[1]: sshd@11-168.119.241.96:22-139.178.68.195:42082.service: Deactivated successfully. Jan 30 14:18:32.761630 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:18:32.762572 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:18:32.763856 systemd-logind[1454]: Removed session 12. Jan 30 14:18:37.923238 systemd[1]: Started sshd@12-168.119.241.96:22-139.178.68.195:57996.service - OpenSSH per-connection server daemon (139.178.68.195:57996). Jan 30 14:18:38.891596 sshd[5985]: Accepted publickey for core from 139.178.68.195 port 57996 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:38.893795 sshd[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:38.902183 systemd-logind[1454]: New session 13 of user core. Jan 30 14:18:38.911283 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:18:39.650031 sshd[5985]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:39.657291 systemd[1]: sshd@12-168.119.241.96:22-139.178.68.195:57996.service: Deactivated successfully. Jan 30 14:18:39.661366 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:18:39.662594 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:18:39.664005 systemd-logind[1454]: Removed session 13. Jan 30 14:18:44.830264 systemd[1]: Started sshd@13-168.119.241.96:22-139.178.68.195:58000.service - OpenSSH per-connection server daemon (139.178.68.195:58000). Jan 30 14:18:45.809270 sshd[6016]: Accepted publickey for core from 139.178.68.195 port 58000 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:45.811587 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:45.818122 systemd-logind[1454]: New session 14 of user core. Jan 30 14:18:45.825181 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:18:46.568342 sshd[6016]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:46.573962 systemd[1]: sshd@13-168.119.241.96:22-139.178.68.195:58000.service: Deactivated successfully. Jan 30 14:18:46.578130 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:18:46.579178 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:18:46.580853 systemd-logind[1454]: Removed session 14. Jan 30 14:18:46.745375 systemd[1]: Started sshd@14-168.119.241.96:22-139.178.68.195:55704.service - OpenSSH per-connection server daemon (139.178.68.195:55704). Jan 30 14:18:47.744454 sshd[6029]: Accepted publickey for core from 139.178.68.195 port 55704 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:47.746460 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:47.752602 systemd-logind[1454]: New session 15 of user core. Jan 30 14:18:47.759599 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 30 14:18:48.669488 sshd[6029]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:48.675135 systemd[1]: sshd@14-168.119.241.96:22-139.178.68.195:55704.service: Deactivated successfully. Jan 30 14:18:48.678823 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:18:48.682272 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:18:48.685953 systemd-logind[1454]: Removed session 15. Jan 30 14:18:48.844286 systemd[1]: Started sshd@15-168.119.241.96:22-139.178.68.195:55710.service - OpenSSH per-connection server daemon (139.178.68.195:55710). Jan 30 14:18:49.822946 sshd[6040]: Accepted publickey for core from 139.178.68.195 port 55710 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:49.824458 sshd[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:49.830144 systemd-logind[1454]: New session 16 of user core. Jan 30 14:18:49.839025 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:18:52.725658 sshd[6040]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:52.733690 systemd[1]: sshd@15-168.119.241.96:22-139.178.68.195:55710.service: Deactivated successfully. Jan 30 14:18:52.739781 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:18:52.744609 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:18:52.745905 systemd-logind[1454]: Removed session 16. Jan 30 14:18:52.903313 systemd[1]: Started sshd@16-168.119.241.96:22-139.178.68.195:55726.service - OpenSSH per-connection server daemon (139.178.68.195:55726). Jan 30 14:18:53.887108 sshd[6080]: Accepted publickey for core from 139.178.68.195 port 55726 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:53.888737 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:53.898201 systemd-logind[1454]: New session 17 of user core. Jan 30 14:18:53.905196 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:18:54.775025 sshd[6080]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:54.779651 systemd[1]: sshd@16-168.119.241.96:22-139.178.68.195:55726.service: Deactivated successfully. Jan 30 14:18:54.784034 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:18:54.788954 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:18:54.790977 systemd-logind[1454]: Removed session 17. Jan 30 14:18:54.951403 systemd[1]: Started sshd@17-168.119.241.96:22-139.178.68.195:36434.service - OpenSSH per-connection server daemon (139.178.68.195:36434). Jan 30 14:18:55.934915 sshd[6091]: Accepted publickey for core from 139.178.68.195 port 36434 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:55.934777 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:55.941816 systemd-logind[1454]: New session 18 of user core. Jan 30 14:18:55.948216 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:18:56.689863 sshd[6091]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:56.693851 systemd[1]: sshd@17-168.119.241.96:22-139.178.68.195:36434.service: Deactivated successfully. Jan 30 14:18:56.696658 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:18:56.699648 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. 
Jan 30 14:18:56.701150 systemd-logind[1454]: Removed session 18. Jan 30 14:19:01.867421 systemd[1]: Started sshd@18-168.119.241.96:22-139.178.68.195:36446.service - OpenSSH per-connection server daemon (139.178.68.195:36446). Jan 30 14:19:02.852266 sshd[6130]: Accepted publickey for core from 139.178.68.195 port 36446 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:19:02.854951 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:19:02.861562 systemd-logind[1454]: New session 19 of user core. Jan 30 14:19:02.865161 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:19:03.608019 sshd[6130]: pam_unix(sshd:session): session closed for user core Jan 30 14:19:03.614172 systemd[1]: sshd@18-168.119.241.96:22-139.178.68.195:36446.service: Deactivated successfully. Jan 30 14:19:03.617815 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:19:03.619428 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:19:03.620604 systemd-logind[1454]: Removed session 19. Jan 30 14:19:08.785217 systemd[1]: Started sshd@19-168.119.241.96:22-139.178.68.195:43104.service - OpenSSH per-connection server daemon (139.178.68.195:43104). Jan 30 14:19:09.765994 sshd[6145]: Accepted publickey for core from 139.178.68.195 port 43104 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:19:09.768011 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:19:09.774233 systemd-logind[1454]: New session 20 of user core. Jan 30 14:19:09.779475 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:19:10.524473 sshd[6145]: pam_unix(sshd:session): session closed for user core Jan 30 14:19:10.529685 systemd[1]: sshd@19-168.119.241.96:22-139.178.68.195:43104.service: Deactivated successfully. Jan 30 14:19:10.531838 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:19:10.532806 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:19:10.534751 systemd-logind[1454]: Removed session 20.
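From here on the log is routine SSH session churn from 139.178.68.195: for each connection systemd starts a per-connection sshd@N-...service unit, sshd accepts the publickey and pam_unix opens a session for user core, systemd-logind registers "New session N" and systemd starts session-N.scope, and on logout both the scope and the service are "Deactivated successfully" before the session is removed. As a worked example of reading these entries, the Go sketch below pairs "New session" / "Removed session" lines to compute a session's lifetime; the two sample lines are copied from the session-8 entries above, while the regular expressions and the assumed year 2025 (these journald short timestamps omit the year) are illustrative assumptions, not anything the log itself prescribes.

// Minimal sketch: pair systemd-logind "New session"/"Removed session"
// journal lines like the ones above and print per-session durations.
// The sample lines are copied from this log; the parsing strategy
// itself is an assumption for illustration, not a standard tool.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var (
	// e.g. "Jan 30 14:18:14.289862 systemd-logind[1454]: New session 8 of user core."
	newRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user`)
	// e.g. "Jan 30 14:18:15.073007 systemd-logind[1454]: Removed session 8."
	remRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

// parseTS parses the short journald timestamp; the year is absent
// from the log, so 2025 is assumed from context.
func parseTS(s string) (time.Time, error) {
	return time.Parse("Jan 2 15:04:05.000000 2006", s+" 2025")
}

func main() {
	lines := []string{
		"Jan 30 14:18:14.289862 systemd-logind[1454]: New session 8 of user core.",
		"Jan 30 14:18:15.073007 systemd-logind[1454]: Removed session 8.",
	}
	opened := map[string]time.Time{} // session ID -> start time
	for _, ln := range lines {
		if m := newRe.FindStringSubmatch(ln); m != nil {
			if t, err := parseTS(m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := remRe.FindStringSubmatch(ln); m != nil {
			if t, err := parseTS(m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lasted %v\n", m[2], t.Sub(start))
				}
			}
		}
	}
}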