Dec 13 09:01:20.904791 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 09:01:20.904816 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 09:01:20.904827 kernel: KASLR enabled
Dec 13 09:01:20.904832 kernel: efi: EFI v2.7 by EDK II
Dec 13 09:01:20.904838 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x1347a1018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Dec 13 09:01:20.904844 kernel: random: crng init done
Dec 13 09:01:20.904851 kernel: ACPI: Early table checksum verification disabled
Dec 13 09:01:20.904856 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Dec 13 09:01:20.904863 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 13 09:01:20.904868 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904876 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904882 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904888 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904894 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904902 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904910 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904916 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904923 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:01:20.904929 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 09:01:20.904936 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 13 09:01:20.904942 kernel: NUMA: Failed to initialise from firmware
Dec 13 09:01:20.904948 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 09:01:20.904955 kernel: NUMA: NODE_DATA [mem 0x13981e800-0x139823fff]
Dec 13 09:01:20.904961 kernel: Zone ranges:
Dec 13 09:01:20.904967 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 09:01:20.904973 kernel: DMA32 empty
Dec 13 09:01:20.904981 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 13 09:01:20.904987 kernel: Movable zone start for each node
Dec 13 09:01:20.904993 kernel: Early memory node ranges
Dec 13 09:01:20.904999 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Dec 13 09:01:20.905006 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Dec 13 09:01:20.905012 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Dec 13 09:01:20.905018 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Dec 13 09:01:20.905024 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Dec 13 09:01:20.905031 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 09:01:20.905037 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 13 09:01:20.905044 kernel: psci: probing for conduit method from ACPI.
Dec 13 09:01:20.905051 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 09:01:20.905058 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 09:01:20.905064 kernel: psci: Trusted OS migration not required
Dec 13 09:01:20.905074 kernel: psci: SMC Calling Convention v1.1
Dec 13 09:01:20.905080 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 09:01:20.905087 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 09:01:20.905096 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 09:01:20.905103 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 09:01:20.905109 kernel: Detected PIPT I-cache on CPU0
Dec 13 09:01:20.905116 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 09:01:20.905123 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 09:01:20.905130 kernel: CPU features: detected: Spectre-v4
Dec 13 09:01:20.905136 kernel: CPU features: detected: Spectre-BHB
Dec 13 09:01:20.905143 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 09:01:20.905150 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 09:01:20.905156 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 09:01:20.905163 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 09:01:20.905171 kernel: alternatives: applying boot alternatives
Dec 13 09:01:20.905179 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 09:01:20.905198 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 09:01:20.905220 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 09:01:20.905227 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 09:01:20.905234 kernel: Fallback order for Node 0: 0
Dec 13 09:01:20.905241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Dec 13 09:01:20.905248 kernel: Policy zone: Normal
Dec 13 09:01:20.905254 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 09:01:20.905261 kernel: software IO TLB: area num 2.
Dec 13 09:01:20.905268 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Dec 13 09:01:20.905277 kernel: Memory: 3881588K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214412K reserved, 0K cma-reserved)
Dec 13 09:01:20.905284 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 09:01:20.905291 kernel: trace event string verifier disabled
Dec 13 09:01:20.905298 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 09:01:20.905305 kernel: rcu: RCU event tracing is enabled.
Dec 13 09:01:20.905312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 09:01:20.905319 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 09:01:20.905325 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 09:01:20.905332 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 09:01:20.905339 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 09:01:20.905346 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 09:01:20.905354 kernel: GICv3: 256 SPIs implemented
Dec 13 09:01:20.905360 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 09:01:20.905367 kernel: Root IRQ handler: gic_handle_irq
Dec 13 09:01:20.905374 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 09:01:20.905381 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 09:01:20.905387 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 09:01:20.905394 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 09:01:20.905401 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 09:01:20.905408 kernel: GICv3: using LPI property table @0x00000001000e0000
Dec 13 09:01:20.905415 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Dec 13 09:01:20.905422 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 09:01:20.905430 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 09:01:20.905437 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 09:01:20.905444 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 09:01:20.905451 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 09:01:20.905458 kernel: Console: colour dummy device 80x25
Dec 13 09:01:20.905465 kernel: ACPI: Core revision 20230628
Dec 13 09:01:20.905472 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 09:01:20.905479 kernel: pid_max: default: 32768 minimum: 301
Dec 13 09:01:20.905486 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 09:01:20.905493 kernel: landlock: Up and running.
Dec 13 09:01:20.905501 kernel: SELinux: Initializing.
Dec 13 09:01:20.905508 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 09:01:20.905515 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 09:01:20.905522 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:01:20.905529 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:01:20.905536 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 09:01:20.905543 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 09:01:20.905550 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 09:01:20.905557 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 09:01:20.905565 kernel: Remapping and enabling EFI services.
Dec 13 09:01:20.905572 kernel: smp: Bringing up secondary CPUs ...
Dec 13 09:01:20.905579 kernel: Detected PIPT I-cache on CPU1
Dec 13 09:01:20.905586 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 09:01:20.905594 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Dec 13 09:01:20.905601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 09:01:20.905610 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 09:01:20.905617 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 09:01:20.905624 kernel: SMP: Total of 2 processors activated.
Dec 13 09:01:20.905631 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 09:01:20.905640 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 09:01:20.905647 kernel: CPU features: detected: Common not Private translations
Dec 13 09:01:20.905659 kernel: CPU features: detected: CRC32 instructions
Dec 13 09:01:20.905668 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 09:01:20.905676 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 09:01:20.905683 kernel: CPU features: detected: LSE atomic instructions
Dec 13 09:01:20.905690 kernel: CPU features: detected: Privileged Access Never
Dec 13 09:01:20.905706 kernel: CPU features: detected: RAS Extension Support
Dec 13 09:01:20.905714 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 09:01:20.905727 kernel: CPU: All CPU(s) started at EL1
Dec 13 09:01:20.905736 kernel: alternatives: applying system-wide alternatives
Dec 13 09:01:20.905744 kernel: devtmpfs: initialized
Dec 13 09:01:20.905751 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 09:01:20.905759 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 09:01:20.905766 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 09:01:20.905773 kernel: SMBIOS 3.0.0 present.
Dec 13 09:01:20.905782 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 13 09:01:20.905789 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 09:01:20.905797 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 09:01:20.905804 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 09:01:20.905812 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 09:01:20.905819 kernel: audit: initializing netlink subsys (disabled)
Dec 13 09:01:20.905826 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Dec 13 09:01:20.905834 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 09:01:20.905841 kernel: cpuidle: using governor menu
Dec 13 09:01:20.905850 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 09:01:20.905857 kernel: ASID allocator initialised with 32768 entries
Dec 13 09:01:20.905865 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 09:01:20.905872 kernel: Serial: AMBA PL011 UART driver
Dec 13 09:01:20.905879 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 09:01:20.905886 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 09:01:20.905893 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 09:01:20.905901 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 09:01:20.905908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 09:01:20.905916 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 09:01:20.905924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 09:01:20.905932 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 09:01:20.905940 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 09:01:20.905947 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 09:01:20.905955 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 09:01:20.905962 kernel: ACPI: Added _OSI(Module Device)
Dec 13 09:01:20.905969 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 09:01:20.905977 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 09:01:20.905984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 09:01:20.905993 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 09:01:20.906000 kernel: ACPI: Interpreter enabled
Dec 13 09:01:20.906007 kernel: ACPI: Using GIC for interrupt routing
Dec 13 09:01:20.906014 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 09:01:20.906022 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 09:01:20.906029 kernel: printk: console [ttyAMA0] enabled
Dec 13 09:01:20.906036 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 09:01:20.906235 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 09:01:20.906323 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 09:01:20.906390 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 09:01:20.906454 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 09:01:20.906518 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 09:01:20.906528 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 09:01:20.906535 kernel: PCI host bridge to bus 0000:00
Dec 13 09:01:20.906610 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 09:01:20.906673 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 09:01:20.906793 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 09:01:20.906858 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 09:01:20.906944 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 09:01:20.907036 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Dec 13 09:01:20.907107 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Dec 13 09:01:20.907181 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 09:01:20.907296 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.907367 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Dec 13 09:01:20.907446 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.907516 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Dec 13 09:01:20.907593 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.907677 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Dec 13 09:01:20.907799 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.907871 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Dec 13 09:01:20.907975 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.908052 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Dec 13 09:01:20.908128 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.908999 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Dec 13 09:01:20.909125 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.909661 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Dec 13 09:01:20.909802 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.909881 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Dec 13 09:01:20.909956 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 09:01:20.910027 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Dec 13 09:01:20.910118 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Dec 13 09:01:20.910222 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Dec 13 09:01:20.910602 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 09:01:20.910686 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Dec 13 09:01:20.910828 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 09:01:20.910907 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 09:01:20.910990 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 09:01:20.911068 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Dec 13 09:01:20.911158 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 09:01:20.913373 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Dec 13 09:01:20.913471 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 13 09:01:20.913559 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 09:01:20.913633 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 13 09:01:20.913778 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 09:01:20.913858 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 13 09:01:20.913947 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 09:01:20.914018 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Dec 13 09:01:20.914087 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 09:01:20.914166 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 09:01:20.914304 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Dec 13 09:01:20.914376 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 13 09:01:20.914443 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 09:01:20.914516 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 09:01:20.914583 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 13 09:01:20.914650 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 13 09:01:20.914742 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 09:01:20.914819 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 09:01:20.914885 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 13 09:01:20.914957 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 09:01:20.915024 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 13 09:01:20.915092 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 09:01:20.915163 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 09:01:20.915321 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 13 09:01:20.915400 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 09:01:20.915471 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 09:01:20.915539 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 13 09:01:20.915606 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Dec 13 09:01:20.915675 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 09:01:20.915760 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 13 09:01:20.915828 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 13 09:01:20.915905 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 09:01:20.915971 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 13 09:01:20.916035 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 13 09:01:20.916108 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 09:01:20.916175 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 13 09:01:20.918375 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 13 09:01:20.918471 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 09:01:20.918540 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 13 09:01:20.918638 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 13 09:01:20.918792 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 09:01:20.918869 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 09:01:20.918943 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 09:01:20.919011 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 09:01:20.919082 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 09:01:20.919149 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 09:01:20.919254 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Dec 13 09:01:20.919325 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 09:01:20.919397 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Dec 13 09:01:20.919467 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 09:01:20.919538 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Dec 13 09:01:20.919605 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 09:01:20.919701 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Dec 13 09:01:20.919797 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 09:01:20.919881 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Dec 13 09:01:20.919952 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 09:01:20.920020 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Dec 13 09:01:20.920087 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 09:01:20.920158 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Dec 13 09:01:20.921348 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Dec 13 09:01:20.921441 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Dec 13 09:01:20.921508 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 09:01:20.921576 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Dec 13 09:01:20.921643 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 09:01:20.921775 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Dec 13 09:01:20.921851 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 09:01:20.921922 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Dec 13 09:01:20.921997 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 09:01:20.922069 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Dec 13 09:01:20.922137 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 09:01:20.922497 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Dec 13 09:01:20.922583 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 09:01:20.922655 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Dec 13 09:01:20.922748 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 09:01:20.922826 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Dec 13 09:01:20.922902 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 09:01:20.922975 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Dec 13 09:01:20.923043 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Dec 13 09:01:20.923121 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Dec 13 09:01:20.923337 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Dec 13 09:01:20.923426 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 09:01:20.923496 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Dec 13 09:01:20.923565 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 09:01:20.923636 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 09:01:20.923723 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 09:01:20.923799 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 09:01:20.923880 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Dec 13 09:01:20.923951 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 09:01:20.924022 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 09:01:20.924088 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 13 09:01:20.924153 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 09:01:20.924248 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 09:01:20.924320 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Dec 13 09:01:20.924389 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 09:01:20.924456 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 09:01:20.924523 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 13 09:01:20.924594 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 09:01:20.924670 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 09:01:20.924765 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 09:01:20.924851 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 09:01:20.924924 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 13 09:01:20.924991 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 09:01:20.925071 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Dec 13 09:01:20.925146 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 09:01:20.925244 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 09:01:20.925314 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 13 09:01:20.925381 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 09:01:20.925456 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Dec 13 09:01:20.925526 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Dec 13 09:01:20.925606 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 09:01:20.925673 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 09:01:20.925802 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 13 09:01:20.925880 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 09:01:20.925955 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Dec 13 09:01:20.926027 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Dec 13 09:01:20.926097 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Dec 13 09:01:20.926167 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 09:01:20.929433 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 09:01:20.929544 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 13 09:01:20.929623 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 09:01:20.929710 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 09:01:20.929788 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 09:01:20.929854 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 13 09:01:20.929919 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 09:01:20.929990 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 09:01:20.930056 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 13 09:01:20.930122 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Dec 13 09:01:20.930205 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 09:01:20.930278 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 09:01:20.930338 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 09:01:20.930397 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 09:01:20.930471 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 09:01:20.930531 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 09:01:20.930591 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 09:01:20.930669 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Dec 13 09:01:20.930772 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 09:01:20.930842 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 09:01:20.930915 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Dec 13 09:01:20.930978 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 09:01:20.931039 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 09:01:20.931109 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 09:01:20.931174 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 13 09:01:20.933402 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 09:01:20.933514 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Dec 13 09:01:20.933580 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 13 09:01:20.933641 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 09:01:20.933733 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Dec 13 09:01:20.933804 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 13 09:01:20.933868 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 09:01:20.933951 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Dec 13 09:01:20.934014 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 13 09:01:20.934085 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 09:01:20.934153 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Dec 13 09:01:20.935260 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 13 09:01:20.935375 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 09:01:20.935450 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Dec 13 09:01:20.935514 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 13 09:01:20.935576 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 09:01:20.935592 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 09:01:20.935600 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 09:01:20.935608 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 09:01:20.935616 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 09:01:20.935624 kernel: iommu: Default domain type: Translated
Dec 13 09:01:20.935632 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 09:01:20.935639 kernel: efivars: Registered efivars operations
Dec 13 09:01:20.935647 kernel: vgaarb: loaded
Dec 13 09:01:20.935655 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 09:01:20.935664 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 09:01:20.935672 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 09:01:20.935682 kernel: pnp: PnP ACPI init
Dec 13 09:01:20.935785 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 09:01:20.935799 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 09:01:20.935807 kernel: NET: Registered PF_INET protocol family
Dec 13 09:01:20.935815 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 09:01:20.935823 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 09:01:20.935834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 09:01:20.935842 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 09:01:20.935850 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 09:01:20.935858 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 09:01:20.935866 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 09:01:20.935874 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 09:01:20.935882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 09:01:20.935963 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 13 09:01:20.935976 kernel: PCI: CLS 0 bytes, default 64
Dec 13 09:01:20.935985 kernel: kvm [1]: HYP mode not available
Dec 13 09:01:20.935993 kernel: Initialise system trusted keyrings
Dec 13 09:01:20.936001 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 09:01:20.936009 kernel: Key type asymmetric registered
Dec 13 09:01:20.936016 kernel: Asymmetric key parser 'x509' registered
Dec 13 09:01:20.936024 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 09:01:20.936032 kernel: io scheduler mq-deadline registered
Dec 13 09:01:20.936040 kernel: io scheduler kyber registered
Dec 13 09:01:20.936047 kernel: io scheduler bfq registered
Dec 13 09:01:20.936058 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 09:01:20.936131 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Dec 13 09:01:20.937310 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Dec 13 09:01:20.937461 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.937543 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Dec 13 09:01:20.937612 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Dec 13 09:01:20.937681 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.937818 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Dec 13 09:01:20.937916 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Dec 13 09:01:20.937985 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.938056 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Dec 13 09:01:20.938123 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Dec 13 09:01:20.938247 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.938329 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Dec 13 09:01:20.938406 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Dec 13 09:01:20.938485 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.938558 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Dec 13 09:01:20.938623 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Dec 13 09:01:20.938689 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.938779 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Dec 13 09:01:20.938859 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Dec 13 09:01:20.938927 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.939000 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Dec 13 09:01:20.939067 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Dec 13 09:01:20.939133 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.939146 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Dec 13 09:01:20.939287 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Dec 13 09:01:20.939359 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Dec 13 09:01:20.939424 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:01:20.939435 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 09:01:20.939443 kernel: ACPI: button: Power Button [PWRB]
Dec 13 09:01:20.939451 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 09:01:20.939528 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Dec 13 09:01:20.939603 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Dec 13 09:01:20.939677 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Dec 13 09:01:20.939688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 09:01:20.939709 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 09:01:20.939783 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Dec 13 09:01:20.939795 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Dec 13 09:01:20.939802 kernel: thunder_xcv, ver 1.0
Dec 13 09:01:20.939813 kernel: thunder_bgx, ver 1.0
Dec 13 09:01:20.939821 kernel: nicpf, ver 1.0
Dec 13 09:01:20.939829 kernel: nicvf, ver 1.0
Dec 13 09:01:20.939911 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 09:01:20.939976 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T09:01:20 UTC (1734080480)
Dec 13 09:01:20.939986 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 09:01:20.939995 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 09:01:20.940002 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 09:01:20.940012 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 09:01:20.940020 kernel: NET: Registered PF_INET6 protocol family
Dec 13 09:01:20.940027 kernel: Segment Routing with IPv6
Dec 13 09:01:20.940035 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 09:01:20.940043 kernel: NET: Registered PF_PACKET protocol family
Dec 13 09:01:20.940050 kernel: Key type dns_resolver registered
Dec 13 09:01:20.940058 kernel: registered taskstats version 1
Dec 13 09:01:20.940066 kernel: Loading compiled-in X.509 certificates
Dec 13 09:01:20.940074 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 09:01:20.940083 kernel: Key type .fscrypt registered
Dec 13 09:01:20.940090 kernel: Key type fscrypt-provisioning registered
Dec 13 09:01:20.940098 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 09:01:20.940106 kernel: ima: Allocated hash algorithm: sha1
Dec 13 09:01:20.940113 kernel: ima: No architecture policies found
Dec 13 09:01:20.940121 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 09:01:20.940129 kernel: clk: Disabling unused clocks
Dec 13 09:01:20.940137 kernel: Freeing unused kernel memory: 39360K
Dec 13 09:01:20.940144 kernel: Run /init as init process
Dec 13 09:01:20.940154 kernel: with arguments:
Dec 13 09:01:20.940161 kernel: /init
Dec 13 09:01:20.940169 kernel: with environment:
Dec 13 09:01:20.940176 kernel: HOME=/
Dec 13 09:01:20.940184 kernel: TERM=linux
Dec 13 09:01:20.940218 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 09:01:20.940229 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 09:01:20.940239 systemd[1]: Detected virtualization kvm.
Dec 13 09:01:20.940251 systemd[1]: Detected architecture arm64.
Dec 13 09:01:20.940259 systemd[1]: Running in initrd.
Dec 13 09:01:20.940267 systemd[1]: No hostname configured, using default hostname.
Dec 13 09:01:20.940275 systemd[1]: Hostname set to .
Dec 13 09:01:20.940283 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 09:01:20.940292 systemd[1]: Queued start job for default target initrd.target.
Dec 13 09:01:20.940300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:01:20.940308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:01:20.940319 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 09:01:20.940328 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 09:01:20.940336 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 09:01:20.940345 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 09:01:20.940355 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 09:01:20.940363 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 09:01:20.940373 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:01:20.940381 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:01:20.940390 systemd[1]: Reached target paths.target - Path Units.
Dec 13 09:01:20.940398 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 09:01:20.940406 systemd[1]: Reached target swap.target - Swaps.
Dec 13 09:01:20.940414 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 09:01:20.940422 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 09:01:20.940431 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 09:01:20.940439 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 09:01:20.940449 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 09:01:20.940457 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:01:20.940466 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:01:20.940475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:01:20.940483 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 09:01:20.940493 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 09:01:20.940501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 09:01:20.940509 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 09:01:20.940518 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 09:01:20.940528 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 09:01:20.940536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 09:01:20.940544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:01:20.940553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 09:01:20.940561 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:01:20.940599 systemd-journald[236]: Collecting audit messages is disabled.
Dec 13 09:01:20.940623 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 09:01:20.940632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 09:01:20.940642 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 09:01:20.940650 kernel: Bridge firewalling registered
Dec 13 09:01:20.940659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:01:20.940667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:01:20.940677 systemd-journald[236]: Journal started
Dec 13 09:01:20.940729 systemd-journald[236]: Runtime Journal (/run/log/journal/7ba8d4c6a89f45a9ab9e5d8f1dc0fda3) is 8.0M, max 76.5M, 68.5M free.
Dec 13 09:01:20.910018 systemd-modules-load[237]: Inserted module 'overlay'
Dec 13 09:01:20.941953 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 09:01:20.932730 systemd-modules-load[237]: Inserted module 'br_netfilter'
Dec 13 09:01:20.942775 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 09:01:20.949478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:01:20.953437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:01:20.955047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 09:01:20.961177 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 09:01:20.976359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:01:20.982743 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:01:20.997329 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 09:01:21.000638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:01:21.003596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:01:21.012174 dracut-cmdline[269]: dracut-dracut-053
Dec 13 09:01:21.014450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 09:01:21.019521 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 09:01:21.050325 systemd-resolved[277]: Positive Trust Anchors:
Dec 13 09:01:21.050342 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 09:01:21.050375 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 09:01:21.056488 systemd-resolved[277]: Defaulting to hostname 'linux'.
Dec 13 09:01:21.057645 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 09:01:21.058361 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:01:21.128269 kernel: SCSI subsystem initialized
Dec 13 09:01:21.133239 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 09:01:21.141247 kernel: iscsi: registered transport (tcp)
Dec 13 09:01:21.155275 kernel: iscsi: registered transport (qla4xxx)
Dec 13 09:01:21.155343 kernel: QLogic iSCSI HBA Driver
Dec 13 09:01:21.209340 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 09:01:21.219412 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 09:01:21.238708 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 09:01:21.238793 kernel: device-mapper: uevent: version 1.0.3
Dec 13 09:01:21.238805 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 09:01:21.298253 kernel: raid6: neonx8 gen() 14932 MB/s
Dec 13 09:01:21.315239 kernel: raid6: neonx4 gen() 15010 MB/s
Dec 13 09:01:21.333276 kernel: raid6: neonx2 gen() 12764 MB/s
Dec 13 09:01:21.349245 kernel: raid6: neonx1 gen() 10390 MB/s
Dec 13 09:01:21.366246 kernel: raid6: int64x8 gen() 6912 MB/s
Dec 13 09:01:21.383254 kernel: raid6: int64x4 gen() 7220 MB/s
Dec 13 09:01:21.400272 kernel: raid6: int64x2 gen() 6029 MB/s
Dec 13 09:01:21.417257 kernel: raid6: int64x1 gen() 5012 MB/s
Dec 13 09:01:21.417357 kernel: raid6: using algorithm neonx4 gen() 15010 MB/s
Dec 13 09:01:21.434267 kernel: raid6: .... xor() 11871 MB/s, rmw enabled
Dec 13 09:01:21.434355 kernel: raid6: using neon recovery algorithm
Dec 13 09:01:21.439272 kernel: xor: measuring software checksum speed
Dec 13 09:01:21.439344 kernel: 8regs : 17217 MB/sec
Dec 13 09:01:21.440518 kernel: 32regs : 19641 MB/sec
Dec 13 09:01:21.440542 kernel: arm64_neon : 26972 MB/sec
Dec 13 09:01:21.440569 kernel: xor: using function: arm64_neon (26972 MB/sec)
Dec 13 09:01:21.493288 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 09:01:21.508276 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 09:01:21.515487 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:01:21.531773 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Dec 13 09:01:21.535286 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:01:21.544454 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 09:01:21.560061 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Dec 13 09:01:21.597328 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 09:01:21.603612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 09:01:21.655887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:01:21.662491 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 09:01:21.685678 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 09:01:21.688323 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 09:01:21.688998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:01:21.690751 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 09:01:21.697391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 09:01:21.718358 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 09:01:21.760317 kernel: scsi host0: Virtio SCSI HBA
Dec 13 09:01:21.777343 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 09:01:21.777432 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 09:01:21.821842 kernel: ACPI: bus type USB registered
Dec 13 09:01:21.821895 kernel: usbcore: registered new interface driver usbfs
Dec 13 09:01:21.822429 kernel: usbcore: registered new interface driver hub
Dec 13 09:01:21.822454 kernel: usbcore: registered new device driver usb
Dec 13 09:01:21.823118 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 09:01:21.823255 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:01:21.825131 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:01:21.825740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:01:21.825883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:01:21.827041 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:01:21.836453 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:01:21.843215 kernel: sr 0:0:0:0: Power-on or device reset occurred
Dec 13 09:01:21.845474 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Dec 13 09:01:21.845595 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 09:01:21.845606 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 13 09:01:21.859230 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 09:01:21.884822 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 09:01:21.884951 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 09:01:21.885044 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 09:01:21.885198 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 09:01:21.885294 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 09:01:21.885376 kernel: hub 1-0:1.0: USB hub found
Dec 13 09:01:21.885477 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 09:01:21.885578 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 09:01:21.885760 kernel: hub 2-0:1.0: USB hub found
Dec 13 09:01:21.885874 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 09:01:21.885962 kernel: sd 0:0:0:1: Power-on or device reset occurred
Dec 13 09:01:21.902487 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 09:01:21.902615 kernel: sd 0:0:0:1: [sda] Write Protect is off
Dec 13 09:01:21.902722 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Dec 13 09:01:21.902810 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 09:01:21.902899 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 09:01:21.902912 kernel: GPT:17805311 != 80003071
Dec 13 09:01:21.902922 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 09:01:21.902932 kernel: GPT:17805311 != 80003071
Dec 13 09:01:21.902941 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 09:01:21.902951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 09:01:21.902961 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Dec 13 09:01:21.863545 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:01:21.871600 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:01:21.918233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:01:21.964212 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (521)
Dec 13 09:01:21.964264 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (516)
Dec 13 09:01:21.972024 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 09:01:21.978594 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 09:01:21.988287 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 09:01:21.989596 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 09:01:21.995595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 09:01:22.000395 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 09:01:22.018515 disk-uuid[571]: Primary Header is updated. Dec 13 09:01:22.018515 disk-uuid[571]: Secondary Entries is updated. Dec 13 09:01:22.018515 disk-uuid[571]: Secondary Header is updated. Dec 13 09:01:22.024238 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 09:01:22.029318 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 09:01:22.034271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 09:01:22.122269 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 09:01:22.364451 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 13 09:01:22.501433 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 13 09:01:22.501490 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 09:01:22.503232 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 13 09:01:22.556655 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 13 09:01:22.557540 kernel: usbcore: registered new interface driver usbhid Dec 13 09:01:22.557581 kernel: usbhid: USB HID core driver Dec 13 09:01:23.038223 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 09:01:23.039122 disk-uuid[572]: The operation has completed successfully. Dec 13 09:01:23.090169 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 09:01:23.090320 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 09:01:23.105436 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 09:01:23.120216 sh[590]: Success Dec 13 09:01:23.133234 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 09:01:23.208130 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 09:01:23.210379 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 09:01:23.213260 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 09:01:23.238686 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 09:01:23.238767 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 09:01:23.238794 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 09:01:23.239619 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 09:01:23.239660 kernel: BTRFS info (device dm-0): using free space tree Dec 13 09:01:23.246228 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 09:01:23.248691 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 09:01:23.251252 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 09:01:23.261608 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 09:01:23.266398 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 09:01:23.279254 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 09:01:23.279312 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 09:01:23.279324 kernel: BTRFS info (device sda6): using free space tree Dec 13 09:01:23.283232 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 09:01:23.283295 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 09:01:23.295393 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 09:01:23.297726 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 09:01:23.306887 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 09:01:23.312418 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 09:01:23.406453 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:01:23.410010 ignition[678]: Ignition 2.19.0 Dec 13 09:01:23.410023 ignition[678]: Stage: fetch-offline Dec 13 09:01:23.410064 ignition[678]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:01:23.410072 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:01:23.410267 ignition[678]: parsed url from cmdline: "" Dec 13 09:01:23.410271 ignition[678]: no config URL provided Dec 13 09:01:23.410275 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:01:23.410283 ignition[678]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:01:23.410289 ignition[678]: failed to fetch config: resource requires networking Dec 13 09:01:23.410604 ignition[678]: Ignition finished successfully Dec 13 09:01:23.417470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:01:23.418279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:01:23.446140 systemd-networkd[778]: lo: Link UP Dec 13 09:01:23.446151 systemd-networkd[778]: lo: Gained carrier Dec 13 09:01:23.447745 systemd-networkd[778]: Enumeration completed Dec 13 09:01:23.448270 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:01:23.448591 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:23.448594 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:01:23.450433 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:23.450436 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:01:23.450950 systemd-networkd[778]: eth0: Link UP Dec 13 09:01:23.450954 systemd-networkd[778]: eth0: Gained carrier Dec 13 09:01:23.450962 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:23.451542 systemd[1]: Reached target network.target - Network. 
Dec 13 09:01:23.455704 systemd-networkd[778]: eth1: Link UP Dec 13 09:01:23.455707 systemd-networkd[778]: eth1: Gained carrier Dec 13 09:01:23.455717 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:23.457484 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 09:01:23.473144 ignition[781]: Ignition 2.19.0 Dec 13 09:01:23.474110 ignition[781]: Stage: fetch Dec 13 09:01:23.474880 ignition[781]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:01:23.474894 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:01:23.475025 ignition[781]: parsed url from cmdline: "" Dec 13 09:01:23.475029 ignition[781]: no config URL provided Dec 13 09:01:23.475033 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:01:23.475042 ignition[781]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:01:23.475065 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 09:01:23.475837 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 09:01:23.489318 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 09:01:23.516300 systemd-networkd[778]: eth0: DHCPv4 address 188.245.203.154/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 09:01:23.676608 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 09:01:23.684710 ignition[781]: GET result: OK Dec 13 09:01:23.684897 ignition[781]: parsing config with SHA512: 7f3709a740fdef06de7369b12f4fed3bf664acd33911f11b232e2708b8951593b2205d557495d6e16c344a54f34626a48df0b199256e80cf96a29b92fbbe8eda Dec 13 09:01:23.692476 unknown[781]: fetched base config from "system" Dec 13 09:01:23.692492 unknown[781]: fetched base config from "system" Dec 13 09:01:23.692973 ignition[781]: fetch: fetch complete Dec 13 09:01:23.692498 unknown[781]: fetched user config from "hetzner" Dec 13 09:01:23.692980 ignition[781]: fetch: fetch passed Dec 13 09:01:23.695486 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 09:01:23.693027 ignition[781]: Ignition finished successfully Dec 13 09:01:23.703535 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 09:01:23.719022 ignition[789]: Ignition 2.19.0 Dec 13 09:01:23.719040 ignition[789]: Stage: kargs Dec 13 09:01:23.719312 ignition[789]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:01:23.719323 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:01:23.720341 ignition[789]: kargs: kargs passed Dec 13 09:01:23.723652 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 09:01:23.720417 ignition[789]: Ignition finished successfully Dec 13 09:01:23.733798 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 09:01:23.748061 ignition[795]: Ignition 2.19.0 Dec 13 09:01:23.748078 ignition[795]: Stage: disks Dec 13 09:01:23.748387 ignition[795]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:01:23.748405 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:01:23.749601 ignition[795]: disks: disks passed Dec 13 09:01:23.749663 ignition[795]: Ignition finished successfully Dec 13 09:01:23.753219 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 13 09:01:23.754495 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 09:01:23.755317 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 09:01:23.756418 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:01:23.757413 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:01:23.758325 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:01:23.766429 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 09:01:23.785604 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 09:01:23.790617 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 09:01:23.798348 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 09:01:23.853210 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 09:01:23.854060 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 09:01:23.855290 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 09:01:23.864412 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:01:23.868507 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 09:01:23.877415 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 09:01:23.879395 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 09:01:23.882336 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (811) Dec 13 09:01:23.882454 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:01:23.885992 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 09:01:23.887325 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 09:01:23.887349 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 09:01:23.887359 kernel: BTRFS info (device sda6): using free space tree Dec 13 09:01:23.894284 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 09:01:23.898607 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 09:01:23.898636 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 09:01:23.900623 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 09:01:23.942107 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 09:01:23.946151 coreos-metadata[813]: Dec 13 09:01:23.946 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 09:01:23.949031 coreos-metadata[813]: Dec 13 09:01:23.948 INFO Fetch successful Dec 13 09:01:23.949031 coreos-metadata[813]: Dec 13 09:01:23.948 INFO wrote hostname ci-4081-2-1-6-29baf1648e to /sysroot/etc/hostname Dec 13 09:01:23.951763 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 13 09:01:23.953525 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Dec 13 09:01:23.958055 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 09:01:23.962632 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 09:01:24.058823 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 09:01:24.064361 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 09:01:24.067383 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 09:01:24.078233 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 09:01:24.098229 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 09:01:24.102974 ignition[928]: INFO : Ignition 2.19.0 Dec 13 09:01:24.102974 ignition[928]: INFO : Stage: mount Dec 13 09:01:24.104081 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:01:24.104081 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:01:24.105895 ignition[928]: INFO : mount: mount passed Dec 13 09:01:24.105895 ignition[928]: INFO : Ignition finished successfully Dec 13 09:01:24.105773 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 09:01:24.113379 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 09:01:24.240309 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 09:01:24.247513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:01:24.257411 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939) Dec 13 09:01:24.259007 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 09:01:24.259047 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 09:01:24.259074 kernel: BTRFS info (device sda6): using free space tree Dec 13 09:01:24.262940 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 09:01:24.263015 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 09:01:24.268019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 09:01:24.289816 ignition[956]: INFO : Ignition 2.19.0 Dec 13 09:01:24.289816 ignition[956]: INFO : Stage: files Dec 13 09:01:24.291035 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:01:24.291035 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:01:24.291035 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Dec 13 09:01:24.293168 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 09:01:24.293168 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 09:01:24.295460 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 09:01:24.296590 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 09:01:24.296590 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 09:01:24.295852 unknown[956]: wrote ssh authorized keys file for user: core Dec 13 09:01:24.299006 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 09:01:24.299006 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 09:01:24.370692 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 09:01:24.587288 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 09:01:24.587288 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 09:01:24.589751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 09:01:24.651621 systemd-networkd[778]: eth1: Gained IPv6LL Dec 13 09:01:25.163586 systemd-networkd[778]: eth0: Gained IPv6LL Dec 13 09:01:25.208966 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 09:01:25.467762 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 09:01:25.467762 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 09:01:25.471271 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:01:25.471271 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:01:25.471271 ignition[956]: INFO : files: files passed Dec 13 09:01:25.471271 ignition[956]: INFO : Ignition finished successfully Dec 13 09:01:25.472136 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 09:01:25.480501 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 09:01:25.484698 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 09:01:25.489514 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 09:01:25.489619 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 09:01:25.500441 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:01:25.500441 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:01:25.503271 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:01:25.505469 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:01:25.506336 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 09:01:25.512497 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 09:01:25.546509 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 09:01:25.546702 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 09:01:25.549580 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 09:01:25.551095 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 09:01:25.552849 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 09:01:25.559482 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 09:01:25.579906 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:01:25.587497 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 09:01:25.601139 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:01:25.601906 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:01:25.603205 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 09:01:25.604253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 09:01:25.604392 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:01:25.605699 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 09:01:25.606316 systemd[1]: Stopped target basic.target - Basic System. Dec 13 09:01:25.607398 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 09:01:25.608422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:01:25.609497 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 09:01:25.610569 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 09:01:25.611723 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:01:25.612863 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 09:01:25.613885 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 09:01:25.614947 systemd[1]: Stopped target swap.target - Swaps. Dec 13 09:01:25.615850 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 09:01:25.615984 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:01:25.617151 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:01:25.617865 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:01:25.618845 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 09:01:25.619257 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 13 09:01:25.619941 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 09:01:25.620065 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 09:01:25.621629 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 09:01:25.621921 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:01:25.622884 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 09:01:25.623089 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 09:01:25.623886 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 09:01:25.624038 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:01:25.636265 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 09:01:25.639597 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 09:01:25.640219 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 09:01:25.640433 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:01:25.643678 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 09:01:25.643841 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:01:25.656169 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 09:01:25.656911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 09:01:25.660405 ignition[1008]: INFO : Ignition 2.19.0 Dec 13 09:01:25.660405 ignition[1008]: INFO : Stage: umount Dec 13 09:01:25.660405 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:01:25.660405 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:01:25.660405 ignition[1008]: INFO : umount: umount passed Dec 13 09:01:25.660405 ignition[1008]: INFO : Ignition finished successfully Dec 13 09:01:25.660957 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 09:01:25.662848 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 09:01:25.666441 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 09:01:25.666517 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 09:01:25.668415 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 09:01:25.668487 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 09:01:25.669680 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 09:01:25.669739 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 09:01:25.675355 systemd[1]: Stopped target network.target - Network. Dec 13 09:01:25.675943 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 09:01:25.676025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:01:25.681838 systemd[1]: Stopped target paths.target - Path Units. Dec 13 09:01:25.682925 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 09:01:25.688317 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:01:25.689098 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 09:01:25.689729 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 09:01:25.692338 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 13 09:01:25.692391 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:01:25.693862 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 09:01:25.693907 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 09:01:25.696966 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 09:01:25.697052 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 09:01:25.699308 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 09:01:25.699372 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 09:01:25.701012 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 09:01:25.702110 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 09:01:25.707020 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 09:01:25.712277 systemd-networkd[778]: eth0: DHCPv6 lease lost Dec 13 09:01:25.713095 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 09:01:25.713259 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 09:01:25.715994 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 09:01:25.716057 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:01:25.718944 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 09:01:25.719046 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 09:01:25.720234 systemd-networkd[778]: eth1: DHCPv6 lease lost Dec 13 09:01:25.722079 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 09:01:25.723255 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 09:01:25.724794 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 09:01:25.724849 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:01:25.725625 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 09:01:25.725722 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 09:01:25.736435 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 09:01:25.737646 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 09:01:25.737829 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:01:25.740681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 09:01:25.740751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:01:25.741757 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 09:01:25.741805 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 09:01:25.742865 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:01:25.758475 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 09:01:25.760254 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 09:01:25.761159 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 09:01:25.761341 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:01:25.764542 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 09:01:25.764612 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 09:01:25.765759 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 09:01:25.765791 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:01:25.766692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 09:01:25.766738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:01:25.768138 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 09:01:25.768182 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 09:01:25.769585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:01:25.769628 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:01:25.778462 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 09:01:25.779742 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 09:01:25.779849 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:01:25.781741 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 09:01:25.781900 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 09:01:25.783462 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 09:01:25.783532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:01:25.785034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:01:25.785090 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:01:25.796858 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 09:01:25.797059 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 09:01:25.800797 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 09:01:25.807388 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 09:01:25.826919 systemd[1]: Switching root. Dec 13 09:01:25.861534 systemd-journald[236]: Journal stopped Dec 13 09:01:26.791856 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Dec 13 09:01:26.791942 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 09:01:26.791956 kernel: SELinux: policy capability open_perms=1 Dec 13 09:01:26.791970 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 09:01:26.791980 kernel: SELinux: policy capability always_check_network=0 Dec 13 09:01:26.791989 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 09:01:26.792003 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 09:01:26.792013 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 09:01:26.792022 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 09:01:26.792032 kernel: audit: type=1403 audit(1734080485.977:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 09:01:26.792042 systemd[1]: Successfully loaded SELinux policy in 35.738ms. Dec 13 09:01:26.792067 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.132ms. Dec 13 09:01:26.792078 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:01:26.792089 systemd[1]: Detected virtualization kvm. 
Dec 13 09:01:26.792102 systemd[1]: Detected architecture arm64. Dec 13 09:01:26.792111 systemd[1]: Detected first boot. Dec 13 09:01:26.792122 systemd[1]: Hostname set to <ci-4081-2-1-6-29baf1648e>. Dec 13 09:01:26.792133 systemd[1]: Initializing machine ID from VM UUID. Dec 13 09:01:26.792146 zram_generator::config[1052]: No configuration found. Dec 13 09:01:26.792159 systemd[1]: Populated /etc with preset unit settings. Dec 13 09:01:26.792170 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 09:01:26.792181 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 09:01:26.792207 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 09:01:26.792219 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 09:01:26.792230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 09:01:26.792240 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 09:01:26.792250 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 09:01:26.792260 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 09:01:26.792273 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 09:01:26.792284 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 09:01:26.792294 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 09:01:26.792304 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:01:26.792315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:01:26.792330 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 09:01:26.792346 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 09:01:26.792357 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 09:01:26.792372 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:01:26.792385 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 09:01:26.792395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:01:26.792406 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 09:01:26.792416 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 09:01:26.792430 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 09:01:26.792442 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 09:01:26.792454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:01:26.792464 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:01:26.792475 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:01:26.792485 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:01:26.792496 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 09:01:26.792506 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 09:01:26.792517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:01:26.792527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:01:26.792538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:01:26.792549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 09:01:26.792560 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 09:01:26.792570 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 09:01:26.792580 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 09:01:26.792591 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 09:01:26.792604 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 09:01:26.792616 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 09:01:26.792627 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 09:01:26.792650 systemd[1]: Reached target machines.target - Containers. Dec 13 09:01:26.792666 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 09:01:26.792684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:01:26.792696 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:01:26.792707 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 09:01:26.792717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:01:26.792729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:01:26.792740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:01:26.792750 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 09:01:26.792762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:01:26.792774 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 09:01:26.792785 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 09:01:26.792795 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 09:01:26.792806 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 09:01:26.792817 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 09:01:26.792829 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:01:26.792840 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:01:26.792852 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 09:01:26.792862 kernel: fuse: init (API version 7.39) Dec 13 09:01:26.792872 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 09:01:26.792883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:01:26.792895 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 09:01:26.792908 systemd[1]: Stopped verity-setup.service. Dec 13 09:01:26.792919 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 09:01:26.792931 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Dec 13 09:01:26.792942 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 09:01:26.792953 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 09:01:26.792963 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 09:01:26.792975 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 09:01:26.792986 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:01:26.792997 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 09:01:26.793007 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 09:01:26.793018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:01:26.793028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:01:26.793038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:01:26.793049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:01:26.793060 kernel: ACPI: bus type drm_connector registered Dec 13 09:01:26.793070 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 09:01:26.793083 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 09:01:26.793096 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:01:26.793106 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:01:26.793119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:01:26.793130 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 09:01:26.793142 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 09:01:26.793153 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 09:01:26.793163 kernel: loop: module loaded Dec 13 09:01:26.793173 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 09:01:26.793183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:01:26.797259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 09:01:26.797283 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:01:26.797295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:01:26.797344 systemd-journald[1119]: Collecting audit messages is disabled. Dec 13 09:01:26.797376 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 09:01:26.797388 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 09:01:26.797399 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 09:01:26.797410 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 09:01:26.797422 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:01:26.797433 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 09:01:26.797446 systemd-journald[1119]: Journal started Dec 13 09:01:26.797470 systemd-journald[1119]: Runtime Journal (/run/log/journal/7ba8d4c6a89f45a9ab9e5d8f1dc0fda3) is 8.0M, max 76.5M, 68.5M free. Dec 13 09:01:26.471227 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 09:01:26.492083 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 09:01:26.492607 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 09:01:26.810311 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 09:01:26.814438 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 09:01:26.817217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:01:26.821210 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 09:01:26.821297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:01:26.832216 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 09:01:26.836276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:01:26.840244 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 09:01:26.841373 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:01:26.844064 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 09:01:26.859724 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 09:01:26.859976 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 09:01:26.861831 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 09:01:26.884825 kernel: loop0: detected capacity change from 0 to 114328 Dec 13 09:01:26.901729 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 09:01:26.904326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:01:26.917207 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 09:01:26.913347 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 09:01:26.915119 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:01:26.917925 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 09:01:26.926086 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 09:01:26.931091 systemd-journald[1119]: Time spent on flushing to /var/log/journal/7ba8d4c6a89f45a9ab9e5d8f1dc0fda3 is 37.469ms for 1136 entries. Dec 13 09:01:26.931091 systemd-journald[1119]: System Journal (/var/log/journal/7ba8d4c6a89f45a9ab9e5d8f1dc0fda3) is 8.0M, max 584.8M, 576.8M free. Dec 13 09:01:26.989345 systemd-journald[1119]: Received client request to flush runtime journal. Dec 13 09:01:26.989431 kernel: loop1: detected capacity change from 0 to 8 Dec 13 09:01:26.989454 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 09:01:26.934813 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 09:01:26.942720 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 09:01:26.946239 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 09:01:26.997814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 09:01:27.003773 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 09:01:27.006175 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 09:01:27.008163 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 09:01:27.043250 kernel: loop3: detected capacity change from 0 to 194096 Dec 13 09:01:27.044341 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 09:01:27.051445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:01:27.087228 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Dec 13 09:01:27.087576 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Dec 13 09:01:27.092422 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:01:27.103289 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 09:01:27.117352 kernel: loop5: detected capacity change from 0 to 8 Dec 13 09:01:27.119241 kernel: loop6: detected capacity change from 0 to 114432 Dec 13 09:01:27.132242 kernel: loop7: detected capacity change from 0 to 194096 Dec 13 09:01:27.158799 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 09:01:27.159418 (sd-merge)[1195]: Merged extensions into '/usr'. Dec 13 09:01:27.166583 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 09:01:27.166605 systemd[1]: Reloading... Dec 13 09:01:27.302218 zram_generator::config[1221]: No configuration found. Dec 13 09:01:27.459414 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 09:01:27.491263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:01:27.540914 systemd[1]: Reloading finished in 372 ms. Dec 13 09:01:27.577449 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 09:01:27.579831 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 09:01:27.589514 systemd[1]: Starting ensure-sysext.service... Dec 13 09:01:27.593987 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:01:27.602207 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Dec 13 09:01:27.602234 systemd[1]: Reloading... Dec 13 09:01:27.632517 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 09:01:27.632800 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 09:01:27.633513 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 09:01:27.633800 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Dec 13 09:01:27.633860 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Dec 13 09:01:27.639976 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:01:27.640005 systemd-tmpfiles[1259]: Skipping /boot Dec 13 09:01:27.655420 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 09:01:27.655437 systemd-tmpfiles[1259]: Skipping /boot Dec 13 09:01:27.702235 zram_generator::config[1281]: No configuration found. Dec 13 09:01:27.821837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:01:27.870373 systemd[1]: Reloading finished in 267 ms. Dec 13 09:01:27.885868 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 09:01:27.890835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:01:27.899792 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:01:27.911088 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 09:01:27.913604 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 09:01:27.923334 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:01:27.926703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:01:27.930542 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 09:01:27.946539 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 09:01:27.950278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:01:27.952356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:01:27.955958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:01:27.959488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:01:27.961440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:01:27.963480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:01:27.963696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:01:27.969379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:01:27.975501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:01:27.976923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:01:27.985609 systemd[1]: Finished ensure-sysext.service. Dec 13 09:01:28.009549 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 09:01:28.011135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 09:01:28.017450 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 09:01:28.020096 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 09:01:28.032348 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Dec 13 09:01:28.041066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:01:28.041818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 09:01:28.052961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:01:28.053146 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:01:28.055005 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:01:28.056121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:01:28.057779 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:01:28.058260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:01:28.060513 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 09:01:28.061979 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:01:28.062090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:01:28.067562 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 09:01:28.073153 augenrules[1362]: No rules Dec 13 09:01:28.076700 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:01:28.095952 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 09:01:28.097574 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:01:28.103015 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:01:28.120534 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:01:28.138856 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 09:01:28.140330 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 09:01:28.207435 systemd-resolved[1329]: Positive Trust Anchors: Dec 13 09:01:28.207459 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:01:28.207492 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:01:28.211219 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1394) Dec 13 09:01:28.214931 systemd-resolved[1329]: Using system hostname 'ci-4081-2-1-6-29baf1648e'. Dec 13 09:01:28.218768 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:01:28.220386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:01:28.225723 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
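The Positive/Negative Trust Anchors dump is systemd-resolved loading its built-in DNSSEC root DS record plus the private-range reverse zones and special-use domains it will never forward upstream. The resolver's live state can be read back with resolvectl; a sketch:

  # Per-link DNS servers, DNSSEC mode, and the current global state
  resolvectl status

  # A test query through the configured servers (any resolvable name will do)
  resolvectl query flatcar.org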
Dec 13 09:01:28.230024 systemd-networkd[1379]: lo: Link UP Dec 13 09:01:28.232526 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1394) Dec 13 09:01:28.232347 systemd-networkd[1379]: lo: Gained carrier Dec 13 09:01:28.270970 systemd-networkd[1379]: Enumeration completed Dec 13 09:01:28.271328 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:01:28.272147 systemd[1]: Reached target network.target - Network. Dec 13 09:01:28.283428 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 09:01:28.325234 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 09:01:28.327748 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:28.327781 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:01:28.328496 systemd-networkd[1379]: eth0: Link UP Dec 13 09:01:28.328500 systemd-networkd[1379]: eth0: Gained carrier Dec 13 09:01:28.328518 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:28.339060 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:28.339716 systemd-networkd[1379]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:01:28.341007 systemd-networkd[1379]: eth1: Link UP Dec 13 09:01:28.341014 systemd-networkd[1379]: eth1: Gained carrier Dec 13 09:01:28.341036 systemd-networkd[1379]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:01:28.345213 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1375) Dec 13 09:01:28.371389 systemd-networkd[1379]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 09:01:28.372261 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Dec 13 09:01:28.391329 systemd-networkd[1379]: eth0: DHCPv4 address 188.245.203.154/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 09:01:28.391679 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Dec 13 09:01:28.392320 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Dec 13 09:01:28.404986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 09:01:28.414501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 09:01:28.435575 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 09:01:28.446266 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 09:01:28.447328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:01:28.455047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
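Both NICs matched the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd flags each match as based on a "potentially unpredictable interface name". Pinning an interface to its own fragment makes the match explicit; a sketch, with an illustrative file name:

  # /etc/systemd/network/10-eth0.network
  [Match]
  Name=eth0

  [Network]
  DHCP=ipv4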
Dec 13 09:01:28.458322 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 13 09:01:28.458373 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 09:01:28.458389 kernel: [drm] features: -context_init Dec 13 09:01:28.459150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:01:28.466451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:01:28.467678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:01:28.467713 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:01:28.485219 kernel: [drm] number of scanouts: 1 Dec 13 09:01:28.485322 kernel: [drm] number of cap sets: 0 Dec 13 09:01:28.496250 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 09:01:28.489811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:01:28.491225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:01:28.497813 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 09:01:28.497612 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:01:28.497783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:01:28.509270 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 09:01:28.511461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:01:28.511704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:01:28.519999 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:01:28.520063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:01:28.527507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:01:28.596124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:01:28.649420 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 09:01:28.659520 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 09:01:28.675710 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:01:28.707228 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 09:01:28.710958 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:01:28.712047 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:01:28.713362 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 09:01:28.714248 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 09:01:28.715220 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 09:01:28.715921 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 09:01:28.716617 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
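The modprobe@dm_mod, modprobe@efi_pstore, and modprobe@loop units above are instances of systemd's modprobe@.service template, which hands the instance name to modprobe. The same load can be driven by hand; a sketch:

  # Equivalent to the template instances started above
  systemctl start modprobe@loop.service
  lsmod | grep '^loop'    # confirm the module is resident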
Dec 13 09:01:28.717306 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 09:01:28.717343 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:01:28.717806 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:01:28.721271 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 09:01:28.723869 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 09:01:28.730743 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 09:01:28.734144 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 09:01:28.737241 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 09:01:28.738174 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:01:28.739112 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:01:28.739821 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:01:28.739941 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:01:28.743072 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:01:28.743665 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 09:01:28.749636 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 09:01:28.759603 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 09:01:28.764783 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 09:01:28.766719 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 09:01:28.767291 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 09:01:28.771274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 09:01:28.776528 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 09:01:28.782430 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 09:01:28.790371 jq[1444]: false Dec 13 09:01:28.785429 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 09:01:28.792475 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 09:01:28.800557 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 09:01:28.804429 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 09:01:28.804974 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 09:01:28.808445 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 09:01:28.811953 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
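sshd here is socket-activated: systemd owns the port-22 listener and spawns a per-connection daemon for each client, as the later "OpenSSH per-connection server daemon" entries in this log show. A sketch of confirming the listener:

  # No sshd process runs until a client connects; the socket unit holds the port
  systemctl status sshd.socket
  ss -ltn 'sport = :22'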
Dec 13 09:01:28.818257 coreos-metadata[1442]: Dec 13 09:01:28.813 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 09:01:28.818257 coreos-metadata[1442]: Dec 13 09:01:28.814 INFO Fetch successful Dec 13 09:01:28.818257 coreos-metadata[1442]: Dec 13 09:01:28.815 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 09:01:28.818257 coreos-metadata[1442]: Dec 13 09:01:28.815 INFO Fetch successful Dec 13 09:01:28.816234 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 09:01:28.820611 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 09:01:28.822257 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 09:01:28.850088 dbus-daemon[1443]: [system] SELinux support is enabled Dec 13 09:01:28.855260 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 09:01:28.860859 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 09:01:28.863311 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 09:01:28.890007 jq[1455]: true Dec 13 09:01:28.874595 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 09:01:28.874665 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 09:01:28.879469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 09:01:28.879492 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 09:01:28.890498 jq[1473]: true Dec 13 09:01:28.889120 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 09:01:28.904232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 09:01:28.904441 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 09:01:28.911924 extend-filesystems[1445]: Found loop4 Dec 13 09:01:28.911924 extend-filesystems[1445]: Found loop5 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found loop6 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found loop7 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda1 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda2 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda3 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found usr Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda4 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda6 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda7 Dec 13 09:01:28.917071 extend-filesystems[1445]: Found sda9 Dec 13 09:01:28.917071 extend-filesystems[1445]: Checking size of /dev/sda9 Dec 13 09:01:28.941346 tar[1459]: linux-arm64/helm Dec 13 09:01:28.987275 extend-filesystems[1445]: Resized partition /dev/sda9 Dec 13 09:01:28.995177 update_engine[1454]: I20241213 09:01:28.994264 1454 main.cc:92] Flatcar Update Engine starting Dec 13 09:01:28.996488 systemd-logind[1453]: New seat seat0. 
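coreos-metadata is a plain HTTP client against Hetzner's link-local metadata service; the two fetches above can be reproduced by hand from the host:

  # Same endpoints the agent hit, fetched manually
  curl -s http://169.254.169.254/hetzner/v1/metadata
  curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks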
Dec 13 09:01:28.997576 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 09:01:28.997592 systemd-logind[1453]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 13 09:01:28.997865 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 09:01:29.001645 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024) Dec 13 09:01:29.015430 update_engine[1454]: I20241213 09:01:29.012706 1454 update_check_scheduler.cc:74] Next update check in 5m41s Dec 13 09:01:29.019891 systemd[1]: Started update-engine.service - Update Engine. Dec 13 09:01:29.021213 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 09:01:29.024042 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 09:01:29.036457 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 09:01:29.037582 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 09:01:29.047838 bash[1508]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:01:29.051598 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 09:01:29.066783 systemd[1]: Starting sshkeys.service... Dec 13 09:01:29.092560 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 09:01:29.100827 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 09:01:29.178246 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1394) Dec 13 09:01:29.197922 coreos-metadata[1519]: Dec 13 09:01:29.197 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 09:01:29.199976 coreos-metadata[1519]: Dec 13 09:01:29.199 INFO Fetch successful Dec 13 09:01:29.212274 unknown[1519]: wrote ssh authorized keys file for user: core Dec 13 09:01:29.216204 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 09:01:29.241034 extend-filesystems[1503]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 09:01:29.241034 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 09:01:29.241034 extend-filesystems[1503]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 09:01:29.249024 extend-filesystems[1445]: Resized filesystem in /dev/sda9 Dec 13 09:01:29.249024 extend-filesystems[1445]: Found sr0 Dec 13 09:01:29.242010 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 09:01:29.243541 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 09:01:29.306575 containerd[1469]: time="2024-12-13T09:01:29.305913200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 09:01:29.328849 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 09:01:29.341553 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:01:29.344442 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 09:01:29.349276 systemd[1]: Finished sshkeys.service. Dec 13 09:01:29.367264 containerd[1469]: time="2024-12-13T09:01:29.366091160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
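The extend-filesystems pass grows the root in the two steps visible above: the sda9 partition is enlarged first, then the mounted ext4 filesystem is grown online from 1617920 to 9393147 blocks. Because resize2fs supports online growth of a mounted ext4, the by-hand equivalent is just:

  # Grow the mounted root to fill its (already enlarged) partition
  resize2fs /dev/sda9
  df -h /    # confirm the new capacity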
type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368088 containerd[1469]: time="2024-12-13T09:01:29.368032880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368088 containerd[1469]: time="2024-12-13T09:01:29.368083080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 09:01:29.368231 containerd[1469]: time="2024-12-13T09:01:29.368103520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 09:01:29.368354 containerd[1469]: time="2024-12-13T09:01:29.368327160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 09:01:29.368387 containerd[1469]: time="2024-12-13T09:01:29.368353280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368449 containerd[1469]: time="2024-12-13T09:01:29.368428920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368449 containerd[1469]: time="2024-12-13T09:01:29.368446240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368664 containerd[1469]: time="2024-12-13T09:01:29.368637160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368664 containerd[1469]: time="2024-12-13T09:01:29.368661440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368716 containerd[1469]: time="2024-12-13T09:01:29.368675400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368716 containerd[1469]: time="2024-12-13T09:01:29.368686520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368779 containerd[1469]: time="2024-12-13T09:01:29.368761120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:01:29.368981 containerd[1469]: time="2024-12-13T09:01:29.368959520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:01:29.369087 containerd[1469]: time="2024-12-13T09:01:29.369065840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:01:29.369113 containerd[1469]: time="2024-12-13T09:01:29.369085040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 09:01:29.369172 containerd[1469]: time="2024-12-13T09:01:29.369156160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 09:01:29.369328 containerd[1469]: time="2024-12-13T09:01:29.369304280Z" level=info msg="metadata content store policy set" policy=shared Dec 13 09:01:29.379406 containerd[1469]: time="2024-12-13T09:01:29.379340720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 09:01:29.380253 containerd[1469]: time="2024-12-13T09:01:29.380224640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 09:01:29.380253 containerd[1469]: time="2024-12-13T09:01:29.380254640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 09:01:29.380379 containerd[1469]: time="2024-12-13T09:01:29.380324520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 09:01:29.380379 containerd[1469]: time="2024-12-13T09:01:29.380345040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 09:01:29.380563 containerd[1469]: time="2024-12-13T09:01:29.380537920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 09:01:29.381405 containerd[1469]: time="2024-12-13T09:01:29.381370840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 09:01:29.381646 containerd[1469]: time="2024-12-13T09:01:29.381589560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 09:01:29.381707 containerd[1469]: time="2024-12-13T09:01:29.381655120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 09:01:29.381707 containerd[1469]: time="2024-12-13T09:01:29.381675680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 09:01:29.381707 containerd[1469]: time="2024-12-13T09:01:29.381690160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 09:01:29.381782 containerd[1469]: time="2024-12-13T09:01:29.381709080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 09:01:29.381782 containerd[1469]: time="2024-12-13T09:01:29.381722360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 09:01:29.381782 containerd[1469]: time="2024-12-13T09:01:29.381738440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 09:01:29.381782 containerd[1469]: time="2024-12-13T09:01:29.381754880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 09:01:29.381782 containerd[1469]: time="2024-12-13T09:01:29.381768880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 09:01:29.381782 containerd[1469]: time="2024-12-13T09:01:29.381781280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381793280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381815120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381829760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381843760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381858000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381870480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381884360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381896360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381909280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381924320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381941000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381953240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381966200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381980200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382003 containerd[1469]: time="2024-12-13T09:01:29.381997520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 09:01:29.382726 containerd[1469]: time="2024-12-13T09:01:29.382022240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382726 containerd[1469]: time="2024-12-13T09:01:29.382034760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.382726 containerd[1469]: time="2024-12-13T09:01:29.382046440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 09:01:29.383384 containerd[1469]: time="2024-12-13T09:01:29.383228080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 09:01:29.383384 containerd[1469]: time="2024-12-13T09:01:29.383259800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 09:01:29.383443 containerd[1469]: time="2024-12-13T09:01:29.383416160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 09:01:29.383443 containerd[1469]: time="2024-12-13T09:01:29.383432400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 09:01:29.383492 containerd[1469]: time="2024-12-13T09:01:29.383444400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.383492 containerd[1469]: time="2024-12-13T09:01:29.383459480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 09:01:29.383492 containerd[1469]: time="2024-12-13T09:01:29.383470240Z" level=info msg="NRI interface is disabled by configuration." Dec 13 09:01:29.383492 containerd[1469]: time="2024-12-13T09:01:29.383481080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 09:01:29.384138 containerd[1469]: time="2024-12-13T09:01:29.383855760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 09:01:29.384138 containerd[1469]: time="2024-12-13T09:01:29.383924400Z" level=info msg="Connect containerd service" Dec 13 09:01:29.384138 containerd[1469]: time="2024-12-13T09:01:29.383955480Z" level=info msg="using legacy CRI server" Dec 13 09:01:29.384138 containerd[1469]: time="2024-12-13T09:01:29.383962320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 09:01:29.384138 containerd[1469]: time="2024-12-13T09:01:29.384057680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 09:01:29.387911 containerd[1469]: time="2024-12-13T09:01:29.387744120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:01:29.388023 containerd[1469]: time="2024-12-13T09:01:29.387965600Z" level=info msg="Start subscribing containerd event" Dec 13 09:01:29.388023 containerd[1469]: time="2024-12-13T09:01:29.388016080Z" level=info msg="Start recovering state" Dec 13 09:01:29.388799 containerd[1469]: time="2024-12-13T09:01:29.388094000Z" level=info msg="Start event monitor" Dec 13 09:01:29.388799 containerd[1469]: time="2024-12-13T09:01:29.388114600Z" level=info msg="Start snapshots syncer" Dec 13 09:01:29.388799 containerd[1469]: time="2024-12-13T09:01:29.388123920Z" level=info msg="Start cni network conf syncer for default" Dec 13 09:01:29.388799 containerd[1469]: time="2024-12-13T09:01:29.388132120Z" level=info msg="Start streaming server" Dec 13 09:01:29.388972 containerd[1469]: time="2024-12-13T09:01:29.388938640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 09:01:29.389015 containerd[1469]: time="2024-12-13T09:01:29.388996120Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 09:01:29.390734 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 09:01:29.393296 containerd[1469]: time="2024-12-13T09:01:29.393257160Z" level=info msg="containerd successfully booted in 0.088696s" Dec 13 09:01:29.633488 tar[1459]: linux-arm64/LICENSE Dec 13 09:01:29.633762 tar[1459]: linux-arm64/README.md Dec 13 09:01:29.646285 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 09:01:30.105829 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 09:01:30.128020 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 09:01:30.136689 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 09:01:30.147365 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 09:01:30.147653 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 09:01:30.155344 systemd-networkd[1379]: eth1: Gained IPv6LL Dec 13 09:01:30.156968 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Dec 13 09:01:30.157808 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
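The CRI config dump above shows runc wired up as Type:io.containerd.runc.v2 with Options:map[SystemdCgroup:true], i.e. the v2 shim with systemd-managed cgroups; in containerd 1.7's config.toml that corresponds to a stanza like:

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

The "failed to load cni during init" error alongside it is expected at this stage: /etc/cni/net.d is still empty, and the CRI plugin re-checks once a CNI config appears.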
Dec 13 09:01:30.161863 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 09:01:30.163666 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 09:01:30.175184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:01:30.178357 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 09:01:30.180598 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 09:01:30.198837 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 09:01:30.201697 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 09:01:30.204365 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 09:01:30.220140 systemd-networkd[1379]: eth0: Gained IPv6LL Dec 13 09:01:30.221828 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Dec 13 09:01:30.224331 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 09:01:30.905463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:01:30.907139 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 09:01:30.912372 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:01:30.913770 systemd[1]: Startup finished in 847ms (kernel) + 5.274s (initrd) + 4.971s (userspace) = 11.093s. Dec 13 09:01:31.518422 kubelet[1573]: E1213 09:01:31.518361 1573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:01:31.520498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:01:31.520658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:01:41.771176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 09:01:41.778590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:01:41.910117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:01:41.923091 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:01:41.975779 kubelet[1593]: E1213 09:01:41.975720 1593 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:01:41.979268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:01:41.979409 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:01:52.230919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 09:01:52.245658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:01:52.375785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
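The kubelet crash loop that starts here has a single cause, repeated verbatim below: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written during kubeadm init or join, so the loop persists until the node is bootstrapped. A minimal stand-in that would satisfy the loader, purely illustrative rather than the kubeadm-generated content:

  # /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd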
Dec 13 09:01:52.394760 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:01:52.445491 kubelet[1609]: E1213 09:01:52.445388 1609 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:01:52.447791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:01:52.447921 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:02:00.619853 systemd-timesyncd[1345]: Contacted time server 77.90.60.63:123 (2.flatcar.pool.ntp.org). Dec 13 09:02:00.620043 systemd-timesyncd[1345]: Initial clock synchronization to Fri 2024-12-13 09:02:00.530838 UTC. Dec 13 09:02:02.698990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 09:02:02.706685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:02:02.826063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:02:02.830928 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:02:02.883008 kubelet[1625]: E1213 09:02:02.882964 1625 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:02:02.885963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:02:02.886465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:02:13.017780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 09:02:13.024582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:02:13.139382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:02:13.144042 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:02:13.196770 kubelet[1640]: E1213 09:02:13.196695 1640 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:02:13.199688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:02:13.199843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:02:13.864326 update_engine[1454]: I20241213 09:02:13.864123 1454 update_attempter.cc:509] Updating boot flags... Dec 13 09:02:13.910235 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1658) Dec 13 09:02:13.977218 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1658) Dec 13 09:02:23.267926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Dec 13 09:02:23.276595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:02:23.393528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:02:23.404831 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:02:23.453143 kubelet[1675]: E1213 09:02:23.453081 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:02:23.455618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:02:23.455755 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:02:33.518150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 09:02:33.526562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:02:33.651356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:02:33.662580 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:02:33.717723 kubelet[1690]: E1213 09:02:33.717658 1690 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:02:33.721563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:02:33.722047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:02:43.767638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 09:02:43.776472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:02:43.885448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:02:43.896633 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:02:43.950853 kubelet[1706]: E1213 09:02:43.950812 1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:02:43.954042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:02:43.954217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:02:54.017712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 09:02:54.028674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:02:54.156871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
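The steady ten-second rhythm of the "Scheduled restart job" entries is the unit's restart policy, not a timer; a sketch of the stanza that would produce this cadence (values illustrative, inferred from the spacing seen here):

  [Service]
  Restart=always
  RestartSec=10s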
Dec 13 09:02:54.161935 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:02:54.206832 kubelet[1723]: E1213 09:02:54.206766 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:02:54.209719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:02:54.209889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:03:02.851778 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 09:03:02.858647 systemd[1]: Started sshd@0-188.245.203.154:22-139.178.89.65:52182.service - OpenSSH per-connection server daemon (139.178.89.65:52182). Dec 13 09:03:03.845683 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 52182 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:03:03.847893 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:03:03.863945 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 09:03:03.875270 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 09:03:03.882614 systemd-logind[1453]: New session 1 of user core. Dec 13 09:03:03.901328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 09:03:03.915772 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 09:03:03.920835 (systemd)[1735]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 09:03:04.041547 systemd[1735]: Queued start job for default target default.target. Dec 13 09:03:04.048356 systemd[1735]: Created slice app.slice - User Application Slice. Dec 13 09:03:04.048393 systemd[1735]: Reached target paths.target - Paths. Dec 13 09:03:04.048502 systemd[1735]: Reached target timers.target - Timers. Dec 13 09:03:04.050319 systemd[1735]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 09:03:04.068033 systemd[1735]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 09:03:04.068202 systemd[1735]: Reached target sockets.target - Sockets. Dec 13 09:03:04.068230 systemd[1735]: Reached target basic.target - Basic System. Dec 13 09:03:04.068287 systemd[1735]: Reached target default.target - Main User Target. Dec 13 09:03:04.068320 systemd[1735]: Startup finished in 138ms. Dec 13 09:03:04.068490 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 09:03:04.077518 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 09:03:04.268123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 09:03:04.286763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:04.404155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 09:03:04.409636 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:03:04.482078 kubelet[1752]: E1213 09:03:04.482019 1752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:03:04.484583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:03:04.484757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:03:04.775612 systemd[1]: Started sshd@1-188.245.203.154:22-139.178.89.65:52188.service - OpenSSH per-connection server daemon (139.178.89.65:52188). Dec 13 09:03:05.777709 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 52188 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:03:05.780825 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:03:05.787260 systemd-logind[1453]: New session 2 of user core. Dec 13 09:03:05.790465 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 09:03:06.469039 sshd[1762]: pam_unix(sshd:session): session closed for user core Dec 13 09:03:06.474130 systemd[1]: sshd@1-188.245.203.154:22-139.178.89.65:52188.service: Deactivated successfully. Dec 13 09:03:06.476098 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 09:03:06.477374 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Dec 13 09:03:06.478591 systemd-logind[1453]: Removed session 2. Dec 13 09:03:06.645542 systemd[1]: Started sshd@2-188.245.203.154:22-139.178.89.65:52196.service - OpenSSH per-connection server daemon (139.178.89.65:52196). Dec 13 09:03:07.634043 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 52196 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:03:07.637101 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:03:07.643094 systemd-logind[1453]: New session 3 of user core. Dec 13 09:03:07.649562 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 09:03:08.319752 sshd[1769]: pam_unix(sshd:session): session closed for user core Dec 13 09:03:08.324632 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Dec 13 09:03:08.325409 systemd[1]: sshd@2-188.245.203.154:22-139.178.89.65:52196.service: Deactivated successfully. Dec 13 09:03:08.327478 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 09:03:08.329396 systemd-logind[1453]: Removed session 3. Dec 13 09:03:08.489969 systemd[1]: Started sshd@3-188.245.203.154:22-139.178.89.65:34466.service - OpenSSH per-connection server daemon (139.178.89.65:34466). Dec 13 09:03:09.493175 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 34466 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:03:09.495272 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:03:09.500563 systemd-logind[1453]: New session 4 of user core. Dec 13 09:03:09.512615 systemd[1]: Started session-4.scope - Session 4 of User core. 
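Each SSH login above follows the same logind sequence: sshd accepts, pam_unix opens the session, and systemd creates a session-N.scope under the per-user manager user@500.service that was provisioned on first login. Both ends are inspectable; a sketch:

  # The session scopes and the per-user manager created above
  loginctl list-sessions
  systemctl status user@500.service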
Dec 13 09:03:10.183304 sshd[1776]: pam_unix(sshd:session): session closed for user core Dec 13 09:03:10.188029 systemd[1]: sshd@3-188.245.203.154:22-139.178.89.65:34466.service: Deactivated successfully. Dec 13 09:03:10.189876 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 09:03:10.191483 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Dec 13 09:03:10.192706 systemd-logind[1453]: Removed session 4. Dec 13 09:03:10.360698 systemd[1]: Started sshd@4-188.245.203.154:22-139.178.89.65:34468.service - OpenSSH per-connection server daemon (139.178.89.65:34468). Dec 13 09:03:11.338453 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 34468 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:03:11.340815 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:03:11.345423 systemd-logind[1453]: New session 5 of user core. Dec 13 09:03:11.351466 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 09:03:11.870811 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 09:03:11.871278 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:03:11.891970 sudo[1786]: pam_unix(sudo:session): session closed for user root Dec 13 09:03:12.053354 sshd[1783]: pam_unix(sshd:session): session closed for user core Dec 13 09:03:12.058918 systemd[1]: sshd@4-188.245.203.154:22-139.178.89.65:34468.service: Deactivated successfully. Dec 13 09:03:12.061647 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 09:03:12.062739 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Dec 13 09:03:12.064008 systemd-logind[1453]: Removed session 5. Dec 13 09:03:12.232533 systemd[1]: Started sshd@5-188.245.203.154:22-139.178.89.65:34480.service - OpenSSH per-connection server daemon (139.178.89.65:34480). Dec 13 09:03:13.222727 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 34480 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:03:13.226088 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:03:13.235373 systemd-logind[1453]: New session 6 of user core. Dec 13 09:03:13.249449 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 09:03:13.750959 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 09:03:13.751382 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:03:13.756343 sudo[1795]: pam_unix(sudo:session): session closed for user root Dec 13 09:03:13.763584 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 09:03:13.763978 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:03:13.784688 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 09:03:13.787899 auditctl[1798]: No rules Dec 13 09:03:13.788675 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 09:03:13.790300 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 09:03:13.799907 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:03:13.825832 augenrules[1816]: No rules Dec 13 09:03:13.827259 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
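The stop/start cycle around audit-rules.service matches what the sudo commands did: with 80-selinux.rules and 99-default.rules removed from /etc/audit/rules.d, augenrules compiles an empty set, hence "No rules" from auditctl and augenrules alike. The same cycle by hand:

  # Show the (now empty) loaded ruleset, then rebuild it from rules.d
  auditctl -l
  augenrules --load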
Dec 13 09:03:13.829011 sudo[1794]: pam_unix(sudo:session): session closed for user root Dec 13 09:03:13.993576 sshd[1791]: pam_unix(sshd:session): session closed for user core Dec 13 09:03:13.999352 systemd[1]: sshd@5-188.245.203.154:22-139.178.89.65:34480.service: Deactivated successfully. Dec 13 09:03:14.001399 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 09:03:14.003820 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Dec 13 09:03:14.004752 systemd-logind[1453]: Removed session 6. Dec 13 09:03:14.178018 systemd[1]: Started sshd@6-188.245.203.154:22-139.178.89.65:34494.service - OpenSSH per-connection server daemon (139.178.89.65:34494). Dec 13 09:03:14.517851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 09:03:14.536593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:14.652373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:03:14.671900 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:03:14.721515 kubelet[1834]: E1213 09:03:14.721468 1834 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:03:14.723807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:03:14.724037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:03:15.160541 sshd[1824]: Accepted publickey for core from 139.178.89.65 port 34494 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:03:15.163442 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:03:15.174452 systemd-logind[1453]: New session 7 of user core. Dec 13 09:03:15.181501 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 09:03:15.684357 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 09:03:15.684641 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:03:15.995671 (dockerd)[1858]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 09:03:15.995708 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 09:03:16.253607 dockerd[1858]: time="2024-12-13T09:03:16.253100816Z" level=info msg="Starting up" Dec 13 09:03:16.350028 dockerd[1858]: time="2024-12-13T09:03:16.349734908Z" level=info msg="Loading containers: start." Dec 13 09:03:16.460290 kernel: Initializing XFRM netlink socket Dec 13 09:03:16.542031 systemd-networkd[1379]: docker0: Link UP Dec 13 09:03:16.556599 dockerd[1858]: time="2024-12-13T09:03:16.556519888Z" level=info msg="Loading containers: done." 
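[Note] During "Loading containers" dockerd sets up its kernel plumbing: the XFRM netlink socket and the docker0 bridge that systemd-networkd reports as "Link UP". A hedged way to verify the result; the 172.17.0.1/16 default only applies because DOCKER_OPT_BIP is unset, as the daemon's environment warning notes:

  ip link show docker0   # bridge created at "docker0: Link UP"
  ip addr show docker0   # default 172.17.0.1/16 unless DOCKER_OPT_BIP overrides it
  docker network inspect bridge --format '{{json .IPAM.Config}}'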
Dec 13 09:03:16.574442 dockerd[1858]: time="2024-12-13T09:03:16.574374095Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 09:03:16.574618 dockerd[1858]: time="2024-12-13T09:03:16.574521575Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 09:03:16.574673 dockerd[1858]: time="2024-12-13T09:03:16.574647936Z" level=info msg="Daemon has completed initialization" Dec 13 09:03:16.605817 dockerd[1858]: time="2024-12-13T09:03:16.605680737Z" level=info msg="API listen on /run/docker.sock" Dec 13 09:03:16.606205 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 09:03:17.704011 containerd[1469]: time="2024-12-13T09:03:17.703906488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 09:03:18.406345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163841367.mount: Deactivated successfully. Dec 13 09:03:20.277763 containerd[1469]: time="2024-12-13T09:03:20.277674805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:20.279964 containerd[1469]: time="2024-12-13T09:03:20.279673210Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864102" Dec 13 09:03:20.281505 containerd[1469]: time="2024-12-13T09:03:20.280867013Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:20.284308 containerd[1469]: time="2024-12-13T09:03:20.284263981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:20.285517 containerd[1469]: time="2024-12-13T09:03:20.285484264Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.581527216s" Dec 13 09:03:20.285622 containerd[1469]: time="2024-12-13T09:03:20.285607144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 09:03:20.307623 containerd[1469]: time="2024-12-13T09:03:20.307585997Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 09:03:22.625264 containerd[1469]: time="2024-12-13T09:03:22.624181100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:22.627224 containerd[1469]: time="2024-12-13T09:03:22.626984627Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900714" Dec 13 09:03:22.628988 containerd[1469]: time="2024-12-13T09:03:22.628904551Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:22.632875 containerd[1469]: time="2024-12-13T09:03:22.632816161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:22.634795 containerd[1469]: time="2024-12-13T09:03:22.634614685Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.326834527s" Dec 13 09:03:22.634795 containerd[1469]: time="2024-12-13T09:03:22.634662405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 09:03:22.661542 containerd[1469]: time="2024-12-13T09:03:22.661295507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 09:03:24.220557 containerd[1469]: time="2024-12-13T09:03:24.220480413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:24.222115 containerd[1469]: time="2024-12-13T09:03:24.221831536Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164352" Dec 13 09:03:24.223223 containerd[1469]: time="2024-12-13T09:03:24.223106659Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:24.228041 containerd[1469]: time="2024-12-13T09:03:24.228003150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:24.229759 containerd[1469]: time="2024-12-13T09:03:24.229641034Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.568303207s" Dec 13 09:03:24.229759 containerd[1469]: time="2024-12-13T09:03:24.229676834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 09:03:24.258646 containerd[1469]: time="2024-12-13T09:03:24.258595740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 09:03:24.767470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 09:03:24.783583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:24.935509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
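[Note] Restart counters 10 and 11 in these records come from the kubelet unit's restart policy, which keeps retrying until bootstrap succeeds. A sketch for inspecting that policy; the property names are standard systemd, though the kubelet unit file on this image is not shown in the log:

  systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
  systemctl cat kubelet   # the [Service] Restart=/RestartSec= lines driving the loop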
Dec 13 09:03:24.937671 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:03:24.996010 kubelet[2090]: E1213 09:03:24.995958 2090 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:03:25.003004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:03:25.003571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:03:25.343328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130555677.mount: Deactivated successfully. Dec 13 09:03:25.698881 containerd[1469]: time="2024-12-13T09:03:25.698721495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:25.700286 containerd[1469]: time="2024-12-13T09:03:25.699607017Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662037" Dec 13 09:03:25.700992 containerd[1469]: time="2024-12-13T09:03:25.700873780Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:25.704017 containerd[1469]: time="2024-12-13T09:03:25.703945667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:25.705120 containerd[1469]: time="2024-12-13T09:03:25.704530548Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.445891608s" Dec 13 09:03:25.705120 containerd[1469]: time="2024-12-13T09:03:25.704571228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 09:03:25.732556 containerd[1469]: time="2024-12-13T09:03:25.732466770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 09:03:26.459369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284699766.mount: Deactivated successfully. 
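[Note] The var-lib-containerd-tmpmounts mount units that keep deactivating are transient mounts containerd creates while unpacking image layers; each disappears when its pull completes. A quick check, assuming crictl is installed and pointed at containerd's socket:

  systemctl list-units --type=mount 'var-lib-containerd-*' --no-pager
  crictl images | grep kube-proxy   # the image whose layers were just unpacked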
Dec 13 09:03:27.099229 containerd[1469]: time="2024-12-13T09:03:27.099151502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:27.100768 containerd[1469]: time="2024-12-13T09:03:27.100704985Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Dec 13 09:03:27.102215 containerd[1469]: time="2024-12-13T09:03:27.101699587Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:27.105708 containerd[1469]: time="2024-12-13T09:03:27.105647596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:27.108290 containerd[1469]: time="2024-12-13T09:03:27.108239841Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.37549423s" Dec 13 09:03:27.108290 containerd[1469]: time="2024-12-13T09:03:27.108281482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 09:03:27.133236 containerd[1469]: time="2024-12-13T09:03:27.133162656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 09:03:27.667128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552751372.mount: Deactivated successfully. 
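[Note] The pause:3.9 image pulled above is the sandbox image: every pod gets one pause container to hold its namespaces. With containerd it is pinned in the CRI plugin config; the path and section below are typical for containerd 1.7 but are an assumption, not confirmed by this log:

  grep -n 'sandbox_image' /etc/containerd/config.toml
  # expected under [plugins."io.containerd.grpc.v1.cri"]:
  #   sandbox_image = "registry.k8s.io/pause:3.9"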
Dec 13 09:03:27.675402 containerd[1469]: time="2024-12-13T09:03:27.675315873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:27.677605 containerd[1469]: time="2024-12-13T09:03:27.677452357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Dec 13 09:03:27.678881 containerd[1469]: time="2024-12-13T09:03:27.678724440Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:27.682695 containerd[1469]: time="2024-12-13T09:03:27.682625288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:27.684409 containerd[1469]: time="2024-12-13T09:03:27.684264252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 551.043636ms" Dec 13 09:03:27.684409 containerd[1469]: time="2024-12-13T09:03:27.684310372Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 09:03:27.710227 containerd[1469]: time="2024-12-13T09:03:27.710042828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 09:03:28.316276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122361141.mount: Deactivated successfully. Dec 13 09:03:31.692137 containerd[1469]: time="2024-12-13T09:03:31.692067452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:31.693984 containerd[1469]: time="2024-12-13T09:03:31.693936576Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Dec 13 09:03:31.695214 containerd[1469]: time="2024-12-13T09:03:31.694760178Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:31.699478 containerd[1469]: time="2024-12-13T09:03:31.698334825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:03:31.701097 containerd[1469]: time="2024-12-13T09:03:31.699702468Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.98961752s" Dec 13 09:03:31.701097 containerd[1469]: time="2024-12-13T09:03:31.699752908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 09:03:35.017957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. 
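[Note] etcd is by far the slowest pull here (~66 MB in just under 4 s, versus ~551 ms for pause). If pull latency matters, the images can be fetched ahead of kubeadm init; version pinned to match the v1.30.8 tags pulled above:

  kubeadm config images list --kubernetes-version v1.30.8
  kubeadm config images pull --kubernetes-version v1.30.8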
Dec 13 09:03:35.030897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:35.158592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:03:35.161830 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:03:35.213502 kubelet[2278]: E1213 09:03:35.213406 2278 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:03:35.216470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:03:35.217276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:03:37.336276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:03:37.345784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:37.382292 systemd[1]: Reloading requested from client PID 2292 ('systemctl') (unit session-7.scope)... Dec 13 09:03:37.382430 systemd[1]: Reloading... Dec 13 09:03:37.496068 zram_generator::config[2333]: No configuration found. Dec 13 09:03:37.603015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:03:37.673177 systemd[1]: Reloading finished in 290 ms. Dec 13 09:03:37.735618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:03:37.740433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:37.744481 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 09:03:37.745009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:03:37.751724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:37.888630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:03:37.889135 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:03:37.944068 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:03:37.944068 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:03:37.944068 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
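[Note] The three deprecation warnings say these flags should move into the KubeletConfiguration that the kubelet now loads from /var/lib/kubelet/config.yaml. A sketch of the equivalent fields, using v1.30 field names and this host's flexvolume path; the containerd socket path is the usual default, not taken from the log, and kubeadm manages that file, so treat this as illustrative only:

  # replaces --container-runtime-endpoint and --volume-plugin-dir
  printf '%s\n' \
    'containerRuntimeEndpoint: unix:///run/containerd/containerd.sock' \
    'volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/' \
    | sudo tee -a /var/lib/kubelet/config.yaml
  # --pod-infra-container-image has no config equivalent; per the warning,
  # the sandbox image now comes from the CRI (see the containerd note above).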
Dec 13 09:03:37.944462 kubelet[2382]: I1213 09:03:37.944108 2382 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:03:38.493106 kubelet[2382]: I1213 09:03:38.493061 2382 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 09:03:38.493106 kubelet[2382]: I1213 09:03:38.493093 2382 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:03:38.493346 kubelet[2382]: I1213 09:03:38.493322 2382 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 09:03:38.515031 kubelet[2382]: I1213 09:03:38.514855 2382 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:03:38.515808 kubelet[2382]: E1213 09:03:38.515560 2382 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.203.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.525675 kubelet[2382]: I1213 09:03:38.525519 2382 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:03:38.525989 kubelet[2382]: I1213 09:03:38.525861 2382 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:03:38.526143 kubelet[2382]: I1213 09:03:38.525904 2382 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-6-29baf1648e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 09:03:38.526279 kubelet[2382]: I1213 09:03:38.526218 2382 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:03:38.526279 kubelet[2382]: I1213 09:03:38.526230 2382 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 09:03:38.526504 kubelet[2382]: I1213 09:03:38.526486 2382 state_mem.go:36] "Initialized new 
in-memory state store" Dec 13 09:03:38.529130 kubelet[2382]: I1213 09:03:38.527816 2382 kubelet.go:400] "Attempting to sync node with API server" Dec 13 09:03:38.529130 kubelet[2382]: I1213 09:03:38.527849 2382 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:03:38.529130 kubelet[2382]: I1213 09:03:38.528088 2382 kubelet.go:312] "Adding apiserver pod source" Dec 13 09:03:38.529130 kubelet[2382]: I1213 09:03:38.528104 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:03:38.529858 kubelet[2382]: I1213 09:03:38.529839 2382 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:03:38.530316 kubelet[2382]: I1213 09:03:38.530293 2382 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:03:38.531113 kubelet[2382]: W1213 09:03:38.531061 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.203.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.531273 kubelet[2382]: E1213 09:03:38.531259 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.203.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.531621 kubelet[2382]: W1213 09:03:38.531589 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.203.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-29baf1648e&limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.531735 kubelet[2382]: E1213 09:03:38.531724 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.203.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-29baf1648e&limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.532025 kubelet[2382]: W1213 09:03:38.532000 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
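[Note] These reflector list/watch failures all hit 188.245.203.154:6443 with "connection refused" because the kube-apiserver they want to watch is itself one of the static pods this kubelet is about to start; the errors are expected to repeat until that pod serves. A crude wait loop against the address taken from the log:

  # loops while the TCP/TLS connection is refused; exits once the apiserver answers
  until curl -sk https://188.245.203.154:6443/healthz; do sleep 2; done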
Dec 13 09:03:38.540049 kubelet[2382]: I1213 09:03:38.540003 2382 server.go:1264] "Started kubelet" Dec 13 09:03:38.543379 kubelet[2382]: I1213 09:03:38.543338 2382 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:03:38.545412 kubelet[2382]: I1213 09:03:38.545353 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:03:38.546636 kubelet[2382]: I1213 09:03:38.545771 2382 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:03:38.546737 kubelet[2382]: I1213 09:03:38.546664 2382 server.go:455] "Adding debug handlers to kubelet server" Dec 13 09:03:38.548625 kubelet[2382]: E1213 09:03:38.548041 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.203.154:6443/api/v1/namespaces/default/events\": dial tcp 188.245.203.154:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-6-29baf1648e.1810b1228ff692b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-6-29baf1648e,UID:ci-4081-2-1-6-29baf1648e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-6-29baf1648e,},FirstTimestamp:2024-12-13 09:03:38.539946677 +0000 UTC m=+0.644148597,LastTimestamp:2024-12-13 09:03:38.539946677 +0000 UTC m=+0.644148597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-6-29baf1648e,}" Dec 13 09:03:38.550510 kubelet[2382]: I1213 09:03:38.550360 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:03:38.557469 kubelet[2382]: E1213 09:03:38.557444 2382 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-6-29baf1648e\" not found" Dec 13 09:03:38.557745 kubelet[2382]: I1213 09:03:38.557734 2382 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 09:03:38.557916 kubelet[2382]: I1213 09:03:38.557903 2382 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 09:03:38.558025 kubelet[2382]: I1213 09:03:38.558014 2382 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:03:38.558499 kubelet[2382]: W1213 09:03:38.558453 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.203.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.558744 kubelet[2382]: E1213 09:03:38.558728 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.203.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.558917 kubelet[2382]: E1213 09:03:38.558900 2382 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:03:38.560847 kubelet[2382]: E1213 09:03:38.560817 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.203.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-29baf1648e?timeout=10s\": dial tcp 188.245.203.154:6443: connect: connection refused" interval="200ms" Dec 13 09:03:38.561182 kubelet[2382]: I1213 09:03:38.561119 2382 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:03:38.561367 kubelet[2382]: I1213 09:03:38.561263 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:03:38.563284 kubelet[2382]: I1213 09:03:38.562471 2382 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:03:38.574814 kubelet[2382]: I1213 09:03:38.574499 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:03:38.575953 kubelet[2382]: I1213 09:03:38.575914 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 09:03:38.576113 kubelet[2382]: I1213 09:03:38.576086 2382 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:03:38.576113 kubelet[2382]: I1213 09:03:38.576111 2382 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 09:03:38.576205 kubelet[2382]: E1213 09:03:38.576157 2382 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:03:38.584964 kubelet[2382]: W1213 09:03:38.584788 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.203.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.585358 kubelet[2382]: E1213 09:03:38.584944 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.203.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:38.592178 kubelet[2382]: I1213 09:03:38.592128 2382 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:03:38.592508 kubelet[2382]: I1213 09:03:38.592368 2382 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:03:38.592508 kubelet[2382]: I1213 09:03:38.592397 2382 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:03:38.594903 kubelet[2382]: I1213 09:03:38.594767 2382 policy_none.go:49] "None policy: Start" Dec 13 09:03:38.595611 kubelet[2382]: I1213 09:03:38.595594 2382 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:03:38.595756 kubelet[2382]: I1213 09:03:38.595621 2382 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:03:38.602764 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 09:03:38.619302 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 09:03:38.622905 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
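[Note] The kubepods, kubepods-burstable, and kubepods-besteffort slices created here form the kubelet's QoS tree; they are systemd slices because the nodeConfig dump above sets CgroupDriver "systemd". Once pods are running, the hierarchy can be browsed directly (sketch, standard systemd tooling):

  systemctl status kubepods.slice --no-pager
  systemd-cgls --no-pager /kubepods.slice   # pods nested under their QoS slice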
Dec 13 09:03:38.632386 kubelet[2382]: I1213 09:03:38.632272 2382 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:03:38.632842 kubelet[2382]: I1213 09:03:38.632625 2382 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:03:38.632842 kubelet[2382]: I1213 09:03:38.632798 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:03:38.636297 kubelet[2382]: E1213 09:03:38.636252 2382 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-6-29baf1648e\" not found" Dec 13 09:03:38.661433 kubelet[2382]: I1213 09:03:38.661017 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.661433 kubelet[2382]: E1213 09:03:38.661386 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.203.154:6443/api/v1/nodes\": dial tcp 188.245.203.154:6443: connect: connection refused" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.676868 kubelet[2382]: I1213 09:03:38.676752 2382 topology_manager.go:215] "Topology Admit Handler" podUID="52fd0f782fc6fe93157d6c0faa3d9bb4" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.680462 kubelet[2382]: I1213 09:03:38.680423 2382 topology_manager.go:215] "Topology Admit Handler" podUID="46b5423142cd62a125aad175ecbc9d6b" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.683670 kubelet[2382]: I1213 09:03:38.683184 2382 topology_manager.go:215] "Topology Admit Handler" podUID="024846a6abe60edfc324e85d04d7791c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.690511 systemd[1]: Created slice kubepods-burstable-pod52fd0f782fc6fe93157d6c0faa3d9bb4.slice - libcontainer container kubepods-burstable-pod52fd0f782fc6fe93157d6c0faa3d9bb4.slice. Dec 13 09:03:38.712489 systemd[1]: Created slice kubepods-burstable-pod46b5423142cd62a125aad175ecbc9d6b.slice - libcontainer container kubepods-burstable-pod46b5423142cd62a125aad175ecbc9d6b.slice. Dec 13 09:03:38.719287 systemd[1]: Created slice kubepods-burstable-pod024846a6abe60edfc324e85d04d7791c.slice - libcontainer container kubepods-burstable-pod024846a6abe60edfc324e85d04d7791c.slice. 
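[Note] The three "Topology Admit Handler" entries are static pods read from the path logged earlier, /etc/kubernetes/manifests; each then gets its own kubepods-burstable-pod<UID>.slice, as the subsequent records show. With a kubeadm control plane one would typically expect these manifests (etcd may be local or external; only three pods are admitted so far in this log), and crictl is assumed available:

  ls /etc/kubernetes/manifests/
  # typical kubeadm layout: etcd.yaml kube-apiserver.yaml
  #   kube-controller-manager.yaml kube-scheduler.yaml
  crictl pods --name kube-apiserver   # sandbox appears once RunPodSandbox succeeds below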
Dec 13 09:03:38.759816 kubelet[2382]: I1213 09:03:38.758990 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52fd0f782fc6fe93157d6c0faa3d9bb4-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" (UID: \"52fd0f782fc6fe93157d6c0faa3d9bb4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.759816 kubelet[2382]: I1213 09:03:38.759057 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.759816 kubelet[2382]: I1213 09:03:38.759106 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.759816 kubelet[2382]: I1213 09:03:38.759263 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.759816 kubelet[2382]: I1213 09:03:38.759314 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/024846a6abe60edfc324e85d04d7791c-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-6-29baf1648e\" (UID: \"024846a6abe60edfc324e85d04d7791c\") " pod="kube-system/kube-scheduler-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.760261 kubelet[2382]: I1213 09:03:38.759367 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52fd0f782fc6fe93157d6c0faa3d9bb4-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" (UID: \"52fd0f782fc6fe93157d6c0faa3d9bb4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.760261 kubelet[2382]: I1213 09:03:38.759463 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52fd0f782fc6fe93157d6c0faa3d9bb4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" (UID: \"52fd0f782fc6fe93157d6c0faa3d9bb4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.760261 kubelet[2382]: I1213 09:03:38.759505 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.760261 kubelet[2382]: I1213 09:03:38.759546 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.761916 kubelet[2382]: E1213 09:03:38.761827 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.203.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-29baf1648e?timeout=10s\": dial tcp 188.245.203.154:6443: connect: connection refused" interval="400ms" Dec 13 09:03:38.863688 kubelet[2382]: I1213 09:03:38.863626 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:38.864277 kubelet[2382]: E1213 09:03:38.864184 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.203.154:6443/api/v1/nodes\": dial tcp 188.245.203.154:6443: connect: connection refused" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:39.010522 containerd[1469]: time="2024-12-13T09:03:39.010017682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-6-29baf1648e,Uid:52fd0f782fc6fe93157d6c0faa3d9bb4,Namespace:kube-system,Attempt:0,}" Dec 13 09:03:39.017551 containerd[1469]: time="2024-12-13T09:03:39.017485902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-6-29baf1648e,Uid:46b5423142cd62a125aad175ecbc9d6b,Namespace:kube-system,Attempt:0,}" Dec 13 09:03:39.023527 containerd[1469]: time="2024-12-13T09:03:39.023466111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-6-29baf1648e,Uid:024846a6abe60edfc324e85d04d7791c,Namespace:kube-system,Attempt:0,}" Dec 13 09:03:39.163672 kubelet[2382]: E1213 09:03:39.163602 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.203.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-29baf1648e?timeout=10s\": dial tcp 188.245.203.154:6443: connect: connection refused" interval="800ms" Dec 13 09:03:39.267169 kubelet[2382]: I1213 09:03:39.266723 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:39.268080 kubelet[2382]: E1213 09:03:39.268033 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.203.154:6443/api/v1/nodes\": dial tcp 188.245.203.154:6443: connect: connection refused" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:39.514918 kubelet[2382]: W1213 09:03:39.514793 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.203.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.514918 kubelet[2382]: E1213 09:03:39.514882 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.203.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.570242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669748229.mount: Deactivated successfully. 
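[Note] All the volumes the reconciler attaches above (ca-certs, k8s-certs, kubeconfig, flexvolume-dir, usr-share-ca-certificates) are hostPath mounts declared in the static pod manifests. A representative fragment of the shape involved, modeled on kubeadm's generated manifests rather than copied from this host:

  # volumes:
  # - name: k8s-certs
  #   hostPath:
  #     path: /etc/kubernetes/pki
  #     type: DirectoryOrCreate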
Dec 13 09:03:39.578794 containerd[1469]: time="2024-12-13T09:03:39.578732166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:03:39.579945 containerd[1469]: time="2024-12-13T09:03:39.579884966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 09:03:39.580943 containerd[1469]: time="2024-12-13T09:03:39.580854960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:03:39.581980 containerd[1469]: time="2024-12-13T09:03:39.581917597Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:03:39.583039 containerd[1469]: time="2024-12-13T09:03:39.583004355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:03:39.584069 containerd[1469]: time="2024-12-13T09:03:39.584012590Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:03:39.584625 containerd[1469]: time="2024-12-13T09:03:39.584585530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:03:39.588270 containerd[1469]: time="2024-12-13T09:03:39.588128334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:03:39.591230 containerd[1469]: time="2024-12-13T09:03:39.589418739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.290613ms" Dec 13 09:03:39.591230 containerd[1469]: time="2024-12-13T09:03:39.590695343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.121269ms" Dec 13 09:03:39.596126 containerd[1469]: time="2024-12-13T09:03:39.596056930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.453904ms" Dec 13 09:03:39.699097 kubelet[2382]: W1213 09:03:39.699028 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.203.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.699097 
kubelet[2382]: E1213 09:03:39.699078 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.203.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.716268 containerd[1469]: time="2024-12-13T09:03:39.714703470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:03:39.716268 containerd[1469]: time="2024-12-13T09:03:39.715447136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:03:39.716268 containerd[1469]: time="2024-12-13T09:03:39.715466817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:03:39.719971 containerd[1469]: time="2024-12-13T09:03:39.715928553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:03:39.724820 containerd[1469]: time="2024-12-13T09:03:39.722124929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:03:39.724820 containerd[1469]: time="2024-12-13T09:03:39.722176851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:03:39.724820 containerd[1469]: time="2024-12-13T09:03:39.722201932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:03:39.724820 containerd[1469]: time="2024-12-13T09:03:39.722278054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:03:39.726127 containerd[1469]: time="2024-12-13T09:03:39.725904701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:03:39.726127 containerd[1469]: time="2024-12-13T09:03:39.726090787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:03:39.726285 containerd[1469]: time="2024-12-13T09:03:39.726103308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:03:39.726533 containerd[1469]: time="2024-12-13T09:03:39.726369077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:03:39.750337 kubelet[2382]: W1213 09:03:39.749542 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.203.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.750337 kubelet[2382]: E1213 09:03:39.749619 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.203.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.749802 systemd[1]: Started cri-containerd-65611fcb9813633640864ab1843f83d4c524223f41ad3a96b7cb22101ffd5f42.scope - libcontainer container 65611fcb9813633640864ab1843f83d4c524223f41ad3a96b7cb22101ffd5f42. Dec 13 09:03:39.760510 systemd[1]: Started cri-containerd-037e959b57f11213dee530191632fe665f0d6c73507f9855d63faf0d64db9744.scope - libcontainer container 037e959b57f11213dee530191632fe665f0d6c73507f9855d63faf0d64db9744. Dec 13 09:03:39.762567 systemd[1]: Started cri-containerd-37c968eea2404c87e584ec5c19772fdaaa14a23da73de6177d0710a2b1402d79.scope - libcontainer container 37c968eea2404c87e584ec5c19772fdaaa14a23da73de6177d0710a2b1402d79. Dec 13 09:03:39.769978 kubelet[2382]: W1213 09:03:39.769784 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.203.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-29baf1648e&limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.769978 kubelet[2382]: E1213 09:03:39.769851 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.203.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-29baf1648e&limit=500&resourceVersion=0": dial tcp 188.245.203.154:6443: connect: connection refused Dec 13 09:03:39.821800 containerd[1469]: time="2024-12-13T09:03:39.820752530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-6-29baf1648e,Uid:024846a6abe60edfc324e85d04d7791c,Namespace:kube-system,Attempt:0,} returns sandbox id \"65611fcb9813633640864ab1843f83d4c524223f41ad3a96b7cb22101ffd5f42\"" Dec 13 09:03:39.828903 containerd[1469]: time="2024-12-13T09:03:39.828858093Z" level=info msg="CreateContainer within sandbox \"65611fcb9813633640864ab1843f83d4c524223f41ad3a96b7cb22101ffd5f42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 09:03:39.830414 containerd[1469]: time="2024-12-13T09:03:39.830364986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-6-29baf1648e,Uid:46b5423142cd62a125aad175ecbc9d6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"37c968eea2404c87e584ec5c19772fdaaa14a23da73de6177d0710a2b1402d79\"" Dec 13 09:03:39.834971 containerd[1469]: time="2024-12-13T09:03:39.834712978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-6-29baf1648e,Uid:52fd0f782fc6fe93157d6c0faa3d9bb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"037e959b57f11213dee530191632fe665f0d6c73507f9855d63faf0d64db9744\"" Dec 13 09:03:39.837161 containerd[1469]: time="2024-12-13T09:03:39.836976696Z" level=info msg="CreateContainer within sandbox \"37c968eea2404c87e584ec5c19772fdaaa14a23da73de6177d0710a2b1402d79\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 09:03:39.839256 containerd[1469]: time="2024-12-13T09:03:39.839216535Z" level=info msg="CreateContainer within sandbox \"037e959b57f11213dee530191632fe665f0d6c73507f9855d63faf0d64db9744\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 09:03:39.855530 containerd[1469]: time="2024-12-13T09:03:39.855481342Z" level=info msg="CreateContainer within sandbox \"37c968eea2404c87e584ec5c19772fdaaa14a23da73de6177d0710a2b1402d79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa1892f5e19903a9b58ef43a8d8c477e5da1af5bf2432097530b8670013164c2\"" Dec 13 09:03:39.856318 containerd[1469]: time="2024-12-13T09:03:39.856283610Z" level=info msg="StartContainer for \"aa1892f5e19903a9b58ef43a8d8c477e5da1af5bf2432097530b8670013164c2\"" Dec 13 09:03:39.859048 containerd[1469]: time="2024-12-13T09:03:39.858705015Z" level=info msg="CreateContainer within sandbox \"65611fcb9813633640864ab1843f83d4c524223f41ad3a96b7cb22101ffd5f42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"69f2dd5c55de256c09d88a7e3f1d38663754ad5ef59fb6643d07cfb0ca4c8e4a\"" Dec 13 09:03:39.859246 containerd[1469]: time="2024-12-13T09:03:39.859213352Z" level=info msg="StartContainer for \"69f2dd5c55de256c09d88a7e3f1d38663754ad5ef59fb6643d07cfb0ca4c8e4a\"" Dec 13 09:03:39.861229 containerd[1469]: time="2024-12-13T09:03:39.860930732Z" level=info msg="CreateContainer within sandbox \"037e959b57f11213dee530191632fe665f0d6c73507f9855d63faf0d64db9744\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"053dc5edf95c28d9e24e4aedbf47a9713072c6cce5edb7c41f2510ba15bdfc08\"" Dec 13 09:03:39.861923 containerd[1469]: time="2024-12-13T09:03:39.861818803Z" level=info msg="StartContainer for \"053dc5edf95c28d9e24e4aedbf47a9713072c6cce5edb7c41f2510ba15bdfc08\"" Dec 13 09:03:39.899981 systemd[1]: Started cri-containerd-69f2dd5c55de256c09d88a7e3f1d38663754ad5ef59fb6643d07cfb0ca4c8e4a.scope - libcontainer container 69f2dd5c55de256c09d88a7e3f1d38663754ad5ef59fb6643d07cfb0ca4c8e4a. Dec 13 09:03:39.909453 systemd[1]: Started cri-containerd-aa1892f5e19903a9b58ef43a8d8c477e5da1af5bf2432097530b8670013164c2.scope - libcontainer container aa1892f5e19903a9b58ef43a8d8c477e5da1af5bf2432097530b8670013164c2. Dec 13 09:03:39.918422 systemd[1]: Started cri-containerd-053dc5edf95c28d9e24e4aedbf47a9713072c6cce5edb7c41f2510ba15bdfc08.scope - libcontainer container 053dc5edf95c28d9e24e4aedbf47a9713072c6cce5edb7c41f2510ba15bdfc08. 
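[Note] These records show the CRI lifecycle end to end: RunPodSandbox returns a sandbox id, CreateContainer returns a container id inside it, StartContainer launches it, and each runs in its own cri-containerd-<id>.scope. The ids can be cross-checked with crictl (the ids below are the scheduler's, copied from this log; assumes crictl targets containerd):

  crictl inspectp 65611fcb9813633640864ab1843f83d4c524223f41ad3a96b7cb22101ffd5f42   # pod sandbox
  crictl inspect 69f2dd5c55de256c09d88a7e3f1d38663754ad5ef59fb6643d07cfb0ca4c8e4a    # kube-scheduler container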
Dec 13 09:03:39.964700 kubelet[2382]: E1213 09:03:39.964642 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.203.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-29baf1648e?timeout=10s\": dial tcp 188.245.203.154:6443: connect: connection refused" interval="1.6s" Dec 13 09:03:39.982227 containerd[1469]: time="2024-12-13T09:03:39.981250651Z" level=info msg="StartContainer for \"aa1892f5e19903a9b58ef43a8d8c477e5da1af5bf2432097530b8670013164c2\" returns successfully" Dec 13 09:03:39.982878 containerd[1469]: time="2024-12-13T09:03:39.982847986Z" level=info msg="StartContainer for \"69f2dd5c55de256c09d88a7e3f1d38663754ad5ef59fb6643d07cfb0ca4c8e4a\" returns successfully" Dec 13 09:03:39.992232 containerd[1469]: time="2024-12-13T09:03:39.991942984Z" level=info msg="StartContainer for \"053dc5edf95c28d9e24e4aedbf47a9713072c6cce5edb7c41f2510ba15bdfc08\" returns successfully" Dec 13 09:03:40.071226 kubelet[2382]: I1213 09:03:40.070813 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:40.072524 kubelet[2382]: E1213 09:03:40.071913 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.203.154:6443/api/v1/nodes\": dial tcp 188.245.203.154:6443: connect: connection refused" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:41.678163 kubelet[2382]: I1213 09:03:41.677501 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:43.004627 kubelet[2382]: E1213 09:03:43.004573 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-6-29baf1648e\" not found" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:43.049402 kubelet[2382]: I1213 09:03:43.049350 2382 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-6-29baf1648e" Dec 13 09:03:43.530920 kubelet[2382]: I1213 09:03:43.530864 2382 apiserver.go:52] "Watching apiserver" Dec 13 09:03:43.559199 kubelet[2382]: I1213 09:03:43.559146 2382 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 09:03:43.661239 kubelet[2382]: E1213 09:03:43.660464 2382 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e" Dec 13 09:03:45.348131 systemd[1]: Reloading requested from client PID 2654 ('systemctl') (unit session-7.scope)... Dec 13 09:03:45.348561 systemd[1]: Reloading... Dec 13 09:03:45.446217 zram_generator::config[2694]: No configuration found. Dec 13 09:03:45.555690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:03:45.640862 systemd[1]: Reloading finished in 291 ms. Dec 13 09:03:45.682312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:03:45.695796 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 09:03:45.696420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:03:45.696661 systemd[1]: kubelet.service: Consumed 1.094s CPU time, 111.3M memory peak, 0B memory swap peak. Dec 13 09:03:45.711490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
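[Note] Both daemon-reloads in this log print the same warning about docker.socket line 6 referencing the legacy /var/run path. A sketch of the drop-in override the message is asking for; the drop-in path and clearing-then-setting ListenStream= are standard systemd practice, not taken from this host:

  sudo mkdir -p /etc/systemd/system/docker.socket.d
  printf '[Socket]\nListenStream=\nListenStream=/run/docker.sock\n' \
    | sudo tee /etc/systemd/system/docker.socket.d/10-runpath.conf
  sudo systemctl daemon-reload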
Dec 13 09:03:45.834511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:03:45.837962 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 09:03:45.895953 kubelet[2739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:03:45.895953 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 09:03:45.895953 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:03:45.895953 kubelet[2739]: I1213 09:03:45.895235 2739 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 09:03:45.902699 kubelet[2739]: I1213 09:03:45.901304 2739 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 09:03:45.902953 kubelet[2739]: I1213 09:03:45.902844 2739 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 09:03:45.903532 kubelet[2739]: I1213 09:03:45.903442 2739 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 09:03:45.905997 kubelet[2739]: I1213 09:03:45.905657 2739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 09:03:45.908341 kubelet[2739]: I1213 09:03:45.908321 2739 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 09:03:45.915910 kubelet[2739]: I1213 09:03:45.915876 2739 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 09:03:45.916082 kubelet[2739]: I1213 09:03:45.916044 2739 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 09:03:45.916255 kubelet[2739]: I1213 09:03:45.916069 2739 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-6-29baf1648e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 09:03:45.916340 kubelet[2739]: I1213 09:03:45.916262 2739 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 09:03:45.916340 kubelet[2739]: I1213 09:03:45.916272 2739 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 09:03:45.916340 kubelet[2739]: I1213 09:03:45.916304 2739 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:03:45.916424 kubelet[2739]: I1213 09:03:45.916409 2739 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 09:03:45.916424 kubelet[2739]: I1213 09:03:45.916421 2739 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 09:03:45.916936 kubelet[2739]: I1213 09:03:45.916509 2739 kubelet.go:312] "Adding apiserver pod source"
Dec 13 09:03:45.916936 kubelet[2739]: I1213 09:03:45.916535 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 09:03:45.918712 kubelet[2739]: I1213 09:03:45.918675 2739 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 09:03:45.919123 kubelet[2739]: I1213 09:03:45.919109 2739 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 09:03:45.919814 kubelet[2739]: I1213 09:03:45.919802 2739 server.go:1264] "Started kubelet"
Dec 13 09:03:45.921981 kubelet[2739]: I1213 09:03:45.921964 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 09:03:45.929353 kubelet[2739]: I1213 09:03:45.929314 2739 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 09:03:45.933208 kubelet[2739]: I1213 09:03:45.931535 2739 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 09:03:45.936201 kubelet[2739]: I1213 09:03:45.933937 2739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 09:03:45.936595 kubelet[2739]: I1213 09:03:45.936575 2739 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 09:03:45.940199 kubelet[2739]: I1213 09:03:45.938619 2739 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 09:03:45.944083 kubelet[2739]: I1213 09:03:45.942563 2739 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 09:03:45.944494 kubelet[2739]: I1213 09:03:45.944476 2739 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 09:03:45.948168 kubelet[2739]: I1213 09:03:45.948138 2739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 09:03:45.949510 kubelet[2739]: I1213 09:03:45.949486 2739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 09:03:45.949978 kubelet[2739]: I1213 09:03:45.949620 2739 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 09:03:45.949978 kubelet[2739]: I1213 09:03:45.949642 2739 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 09:03:45.949978 kubelet[2739]: E1213 09:03:45.949680 2739 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 09:03:45.967981 kubelet[2739]: I1213 09:03:45.967952 2739 factory.go:221] Registration of the systemd container factory successfully
Dec 13 09:03:45.969101 kubelet[2739]: E1213 09:03:45.969070 2739 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 09:03:45.972219 kubelet[2739]: I1213 09:03:45.972141 2739 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 09:03:45.976076 kubelet[2739]: I1213 09:03:45.975528 2739 factory.go:221] Registration of the containerd container factory successfully
Dec 13 09:03:46.018423 kubelet[2739]: I1213 09:03:46.018349 2739 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 09:03:46.018829 kubelet[2739]: I1213 09:03:46.018566 2739 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 09:03:46.018829 kubelet[2739]: I1213 09:03:46.018590 2739 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:03:46.018829 kubelet[2739]: I1213 09:03:46.018740 2739 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 09:03:46.018829 kubelet[2739]: I1213 09:03:46.018751 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 09:03:46.018829 kubelet[2739]: I1213 09:03:46.018768 2739 policy_none.go:49] "None policy: Start"
Dec 13 09:03:46.019911 kubelet[2739]: I1213 09:03:46.019892 2739 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 09:03:46.020269 kubelet[2739]: I1213 09:03:46.020068 2739 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 09:03:46.021011 kubelet[2739]: I1213 09:03:46.020410 2739 state_mem.go:75] "Updated machine memory state"
Dec 13 09:03:46.025220 kubelet[2739]: I1213 09:03:46.025174 2739 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 09:03:46.025544 kubelet[2739]: I1213 09:03:46.025509 2739 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 09:03:46.025852 kubelet[2739]: I1213 09:03:46.025838 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 09:03:46.044137 kubelet[2739]: I1213 09:03:46.044111 2739 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.050597 kubelet[2739]: I1213 09:03:46.050551 2739 topology_manager.go:215] "Topology Admit Handler" podUID="46b5423142cd62a125aad175ecbc9d6b" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.050725 kubelet[2739]: I1213 09:03:46.050667 2739 topology_manager.go:215] "Topology Admit Handler" podUID="024846a6abe60edfc324e85d04d7791c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.050725 kubelet[2739]: I1213 09:03:46.050702 2739 topology_manager.go:215] "Topology Admit Handler" podUID="52fd0f782fc6fe93157d6c0faa3d9bb4" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.061814 kubelet[2739]: I1213 09:03:46.061787 2739 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.062222 kubelet[2739]: I1213 09:03:46.062096 2739 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246642 kubelet[2739]: I1213 09:03:46.245947 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246642 kubelet[2739]: I1213 09:03:46.246023 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246642 kubelet[2739]: I1213 09:03:46.246050 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246642 kubelet[2739]: I1213 09:03:46.246069 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246642 kubelet[2739]: I1213 09:03:46.246086 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46b5423142cd62a125aad175ecbc9d6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-6-29baf1648e\" (UID: \"46b5423142cd62a125aad175ecbc9d6b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246952 kubelet[2739]: I1213 09:03:46.246105 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/024846a6abe60edfc324e85d04d7791c-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-6-29baf1648e\" (UID: \"024846a6abe60edfc324e85d04d7791c\") " pod="kube-system/kube-scheduler-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246952 kubelet[2739]: I1213 09:03:46.246122 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52fd0f782fc6fe93157d6c0faa3d9bb4-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" (UID: \"52fd0f782fc6fe93157d6c0faa3d9bb4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246952 kubelet[2739]: I1213 09:03:46.246138 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52fd0f782fc6fe93157d6c0faa3d9bb4-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" (UID: \"52fd0f782fc6fe93157d6c0faa3d9bb4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.246952 kubelet[2739]: I1213 09:03:46.246155 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52fd0f782fc6fe93157d6c0faa3d9bb4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" (UID: \"52fd0f782fc6fe93157d6c0faa3d9bb4\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:46.918312 kubelet[2739]: I1213 09:03:46.918259 2739 apiserver.go:52] "Watching apiserver"
Dec 13 09:03:46.945456 kubelet[2739]: I1213 09:03:46.945375 2739 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 09:03:47.014224 kubelet[2739]: E1213 09:03:47.012953 2739 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-6-29baf1648e\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e"
Dec 13 09:03:47.039971 kubelet[2739]: I1213 09:03:47.039901 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-6-29baf1648e" podStartSLOduration=1.039857023 podStartE2EDuration="1.039857023s" podCreationTimestamp="2024-12-13 09:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:03:47.039459932 +0000 UTC m=+1.197142461" watchObservedRunningTime="2024-12-13 09:03:47.039857023 +0000 UTC m=+1.197539552"
Dec 13 09:03:47.074550 kubelet[2739]: I1213 09:03:47.074482 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-6-29baf1648e" podStartSLOduration=1.074463123 podStartE2EDuration="1.074463123s" podCreationTimestamp="2024-12-13 09:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:03:47.057717849 +0000 UTC m=+1.215400338" watchObservedRunningTime="2024-12-13 09:03:47.074463123 +0000 UTC m=+1.232145652"
Dec 13 09:03:47.088681 kubelet[2739]: I1213 09:03:47.088626 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-6-29baf1648e" podStartSLOduration=1.088606323 podStartE2EDuration="1.088606323s" podCreationTimestamp="2024-12-13 09:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:03:47.076623344 +0000 UTC m=+1.234305873" watchObservedRunningTime="2024-12-13 09:03:47.088606323 +0000 UTC m=+1.246288852"
Dec 13 09:03:51.643106 sudo[1843]: pam_unix(sudo:session): session closed for user root
Dec 13 09:03:51.803473 sshd[1824]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:51.808618 systemd[1]: sshd@6-188.245.203.154:22-139.178.89.65:34494.service: Deactivated successfully.
Dec 13 09:03:51.810945 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 09:03:51.811378 systemd[1]: session-7.scope: Consumed 7.447s CPU time, 185.5M memory peak, 0B memory swap peak.
Dec 13 09:03:51.812812 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit.
Dec 13 09:03:51.814680 systemd-logind[1453]: Removed session 7.
Dec 13 09:03:59.877318 kubelet[2739]: I1213 09:03:59.877253 2739 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 09:03:59.879248 containerd[1469]: time="2024-12-13T09:03:59.878467433Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 09:03:59.879729 kubelet[2739]: I1213 09:03:59.878863 2739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 09:04:00.636219 kubelet[2739]: I1213 09:04:00.636032 2739 topology_manager.go:215] "Topology Admit Handler" podUID="346c351c-e9f7-408a-9659-f1f3309b9cf3" podNamespace="kube-system" podName="kube-proxy-h6rcl"
Dec 13 09:04:00.649584 systemd[1]: Created slice kubepods-besteffort-pod346c351c_e9f7_408a_9659_f1f3309b9cf3.slice - libcontainer container kubepods-besteffort-pod346c351c_e9f7_408a_9659_f1f3309b9cf3.slice.
Dec 13 09:04:00.738950 kubelet[2739]: I1213 09:04:00.738720 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/346c351c-e9f7-408a-9659-f1f3309b9cf3-xtables-lock\") pod \"kube-proxy-h6rcl\" (UID: \"346c351c-e9f7-408a-9659-f1f3309b9cf3\") " pod="kube-system/kube-proxy-h6rcl"
Dec 13 09:04:00.738950 kubelet[2739]: I1213 09:04:00.738774 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/346c351c-e9f7-408a-9659-f1f3309b9cf3-lib-modules\") pod \"kube-proxy-h6rcl\" (UID: \"346c351c-e9f7-408a-9659-f1f3309b9cf3\") " pod="kube-system/kube-proxy-h6rcl"
Dec 13 09:04:00.738950 kubelet[2739]: I1213 09:04:00.738805 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/346c351c-e9f7-408a-9659-f1f3309b9cf3-kube-proxy\") pod \"kube-proxy-h6rcl\" (UID: \"346c351c-e9f7-408a-9659-f1f3309b9cf3\") " pod="kube-system/kube-proxy-h6rcl"
Dec 13 09:04:00.738950 kubelet[2739]: I1213 09:04:00.738831 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2nkc\" (UniqueName: \"kubernetes.io/projected/346c351c-e9f7-408a-9659-f1f3309b9cf3-kube-api-access-l2nkc\") pod \"kube-proxy-h6rcl\" (UID: \"346c351c-e9f7-408a-9659-f1f3309b9cf3\") " pod="kube-system/kube-proxy-h6rcl"
Dec 13 09:04:00.959497 containerd[1469]: time="2024-12-13T09:04:00.958879075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6rcl,Uid:346c351c-e9f7-408a-9659-f1f3309b9cf3,Namespace:kube-system,Attempt:0,}"
Dec 13 09:04:01.000586 containerd[1469]: time="2024-12-13T09:04:01.000356530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:04:01.000717 containerd[1469]: time="2024-12-13T09:04:01.000618375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:04:01.000788 containerd[1469]: time="2024-12-13T09:04:01.000652136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:01.001262 containerd[1469]: time="2024-12-13T09:04:01.000980383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:01.030742 systemd[1]: Started cri-containerd-79045b883b0e2f82258ea3ecd001b526c4af40e660b046e0036c5b0feddbae43.scope - libcontainer container 79045b883b0e2f82258ea3ecd001b526c4af40e660b046e0036c5b0feddbae43.
Dec 13 09:04:01.033968 kubelet[2739]: I1213 09:04:01.033707 2739 topology_manager.go:215] "Topology Admit Handler" podUID="f620ef09-a1cb-4554-b6b4-80f77501d485" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-8c4tl"
Dec 13 09:04:01.042018 kubelet[2739]: I1213 09:04:01.041551 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkxn7\" (UniqueName: \"kubernetes.io/projected/f620ef09-a1cb-4554-b6b4-80f77501d485-kube-api-access-fkxn7\") pod \"tigera-operator-7bc55997bb-8c4tl\" (UID: \"f620ef09-a1cb-4554-b6b4-80f77501d485\") " pod="tigera-operator/tigera-operator-7bc55997bb-8c4tl"
Dec 13 09:04:01.042548 kubelet[2739]: I1213 09:04:01.042350 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f620ef09-a1cb-4554-b6b4-80f77501d485-var-lib-calico\") pod \"tigera-operator-7bc55997bb-8c4tl\" (UID: \"f620ef09-a1cb-4554-b6b4-80f77501d485\") " pod="tigera-operator/tigera-operator-7bc55997bb-8c4tl"
Dec 13 09:04:01.048868 systemd[1]: Created slice kubepods-besteffort-podf620ef09_a1cb_4554_b6b4_80f77501d485.slice - libcontainer container kubepods-besteffort-podf620ef09_a1cb_4554_b6b4_80f77501d485.slice.
Dec 13 09:04:01.078721 containerd[1469]: time="2024-12-13T09:04:01.078660627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6rcl,Uid:346c351c-e9f7-408a-9659-f1f3309b9cf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"79045b883b0e2f82258ea3ecd001b526c4af40e660b046e0036c5b0feddbae43\""
Dec 13 09:04:01.083750 containerd[1469]: time="2024-12-13T09:04:01.083693648Z" level=info msg="CreateContainer within sandbox \"79045b883b0e2f82258ea3ecd001b526c4af40e660b046e0036c5b0feddbae43\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 09:04:01.105318 containerd[1469]: time="2024-12-13T09:04:01.105263602Z" level=info msg="CreateContainer within sandbox \"79045b883b0e2f82258ea3ecd001b526c4af40e660b046e0036c5b0feddbae43\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b7b5779982885aaf47a890c840d21ac17392eb6d203b3f818fb582c514799df\""
Dec 13 09:04:01.108690 containerd[1469]: time="2024-12-13T09:04:01.105924256Z" level=info msg="StartContainer for \"7b7b5779982885aaf47a890c840d21ac17392eb6d203b3f818fb582c514799df\""
Dec 13 09:04:01.135521 systemd[1]: Started cri-containerd-7b7b5779982885aaf47a890c840d21ac17392eb6d203b3f818fb582c514799df.scope - libcontainer container 7b7b5779982885aaf47a890c840d21ac17392eb6d203b3f818fb582c514799df.
Dec 13 09:04:01.172405 containerd[1469]: time="2024-12-13T09:04:01.172331273Z" level=info msg="StartContainer for \"7b7b5779982885aaf47a890c840d21ac17392eb6d203b3f818fb582c514799df\" returns successfully"
Dec 13 09:04:01.354281 containerd[1469]: time="2024-12-13T09:04:01.354175054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8c4tl,Uid:f620ef09-a1cb-4554-b6b4-80f77501d485,Namespace:tigera-operator,Attempt:0,}"
Dec 13 09:04:01.388934 containerd[1469]: time="2024-12-13T09:04:01.388425824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:04:01.388934 containerd[1469]: time="2024-12-13T09:04:01.388491305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:04:01.388934 containerd[1469]: time="2024-12-13T09:04:01.388507225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:01.388934 containerd[1469]: time="2024-12-13T09:04:01.388614708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:01.417506 systemd[1]: Started cri-containerd-3583b00f424ff47b33b758bcd1faeded32f7f37ad2c2e52769a116abe3765abe.scope - libcontainer container 3583b00f424ff47b33b758bcd1faeded32f7f37ad2c2e52769a116abe3765abe.
Dec 13 09:04:01.478312 containerd[1469]: time="2024-12-13T09:04:01.478262473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8c4tl,Uid:f620ef09-a1cb-4554-b6b4-80f77501d485,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3583b00f424ff47b33b758bcd1faeded32f7f37ad2c2e52769a116abe3765abe\""
Dec 13 09:04:01.483100 containerd[1469]: time="2024-12-13T09:04:01.483048889Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 09:04:01.864065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698178123.mount: Deactivated successfully.
Dec 13 09:04:02.060862 kubelet[2739]: I1213 09:04:02.059831 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h6rcl" podStartSLOduration=2.059811915 podStartE2EDuration="2.059811915s" podCreationTimestamp="2024-12-13 09:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:04:02.059480109 +0000 UTC m=+16.217162638" watchObservedRunningTime="2024-12-13 09:04:02.059811915 +0000 UTC m=+16.217494524"
Dec 13 09:04:06.115207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079062087.mount: Deactivated successfully.
Dec 13 09:04:06.475028 containerd[1469]: time="2024-12-13T09:04:06.472283753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:04:06.475444 containerd[1469]: time="2024-12-13T09:04:06.475127004Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125972"
Dec 13 09:04:06.475444 containerd[1469]: time="2024-12-13T09:04:06.475377969Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:04:06.483462 containerd[1469]: time="2024-12-13T09:04:06.483415113Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:04:06.485067 containerd[1469]: time="2024-12-13T09:04:06.485009462Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 5.001917012s"
Dec 13 09:04:06.485067 containerd[1469]: time="2024-12-13T09:04:06.485054703Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Dec 13 09:04:06.487786 containerd[1469]: time="2024-12-13T09:04:06.487747351Z" level=info msg="CreateContainer within sandbox \"3583b00f424ff47b33b758bcd1faeded32f7f37ad2c2e52769a116abe3765abe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 09:04:06.506559 containerd[1469]: time="2024-12-13T09:04:06.506423207Z" level=info msg="CreateContainer within sandbox \"3583b00f424ff47b33b758bcd1faeded32f7f37ad2c2e52769a116abe3765abe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ccdb722425c8865d6a066bd57eb3122d9394090a17c1d7d973362abebdb95185\""
Dec 13 09:04:06.508886 containerd[1469]: time="2024-12-13T09:04:06.507472226Z" level=info msg="StartContainer for \"ccdb722425c8865d6a066bd57eb3122d9394090a17c1d7d973362abebdb95185\""
Dec 13 09:04:06.544506 systemd[1]: Started cri-containerd-ccdb722425c8865d6a066bd57eb3122d9394090a17c1d7d973362abebdb95185.scope - libcontainer container ccdb722425c8865d6a066bd57eb3122d9394090a17c1d7d973362abebdb95185.
Dec 13 09:04:06.572844 containerd[1469]: time="2024-12-13T09:04:06.572647159Z" level=info msg="StartContainer for \"ccdb722425c8865d6a066bd57eb3122d9394090a17c1d7d973362abebdb95185\" returns successfully"
Dec 13 09:04:10.887971 kubelet[2739]: I1213 09:04:10.887889 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-8c4tl" podStartSLOduration=5.883947218 podStartE2EDuration="10.887871909s" podCreationTimestamp="2024-12-13 09:04:00 +0000 UTC" firstStartedPulling="2024-12-13 09:04:01.481887786 +0000 UTC m=+15.639570315" lastFinishedPulling="2024-12-13 09:04:06.485812477 +0000 UTC m=+20.643495006" observedRunningTime="2024-12-13 09:04:07.069293867 +0000 UTC m=+21.226976436" watchObservedRunningTime="2024-12-13 09:04:10.887871909 +0000 UTC m=+25.045554438"
Dec 13 09:04:10.888557 kubelet[2739]: I1213 09:04:10.888029 2739 topology_manager.go:215] "Topology Admit Handler" podUID="07fce06a-0764-4ff9-a3f3-ce807df56785" podNamespace="calico-system" podName="calico-typha-d47f5c4f6-c82nr"
Dec 13 09:04:10.897288 systemd[1]: Created slice kubepods-besteffort-pod07fce06a_0764_4ff9_a3f3_ce807df56785.slice - libcontainer container kubepods-besteffort-pod07fce06a_0764_4ff9_a3f3_ce807df56785.slice.
Dec 13 09:04:10.910415 kubelet[2739]: I1213 09:04:10.910374 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwgh\" (UniqueName: \"kubernetes.io/projected/07fce06a-0764-4ff9-a3f3-ce807df56785-kube-api-access-zqwgh\") pod \"calico-typha-d47f5c4f6-c82nr\" (UID: \"07fce06a-0764-4ff9-a3f3-ce807df56785\") " pod="calico-system/calico-typha-d47f5c4f6-c82nr"
Dec 13 09:04:10.910415 kubelet[2739]: I1213 09:04:10.910416 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/07fce06a-0764-4ff9-a3f3-ce807df56785-typha-certs\") pod \"calico-typha-d47f5c4f6-c82nr\" (UID: \"07fce06a-0764-4ff9-a3f3-ce807df56785\") " pod="calico-system/calico-typha-d47f5c4f6-c82nr"
Dec 13 09:04:10.910589 kubelet[2739]: I1213 09:04:10.910438 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fce06a-0764-4ff9-a3f3-ce807df56785-tigera-ca-bundle\") pod \"calico-typha-d47f5c4f6-c82nr\" (UID: \"07fce06a-0764-4ff9-a3f3-ce807df56785\") " pod="calico-system/calico-typha-d47f5c4f6-c82nr"
Dec 13 09:04:11.006270 kubelet[2739]: I1213 09:04:11.005504 2739 topology_manager.go:215] "Topology Admit Handler" podUID="67d6ce33-fde6-47c0-a23e-dcb137fc2649" podNamespace="calico-system" podName="calico-node-z7qzd"
Dec 13 09:04:11.026453 systemd[1]: Created slice kubepods-besteffort-pod67d6ce33_fde6_47c0_a23e_dcb137fc2649.slice - libcontainer container kubepods-besteffort-pod67d6ce33_fde6_47c0_a23e_dcb137fc2649.slice.
Dec 13 09:04:11.111417 kubelet[2739]: I1213 09:04:11.111349 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67d6ce33-fde6-47c0-a23e-dcb137fc2649-tigera-ca-bundle\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111571 kubelet[2739]: I1213 09:04:11.111430 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-xtables-lock\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111571 kubelet[2739]: I1213 09:04:11.111469 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-policysync\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111571 kubelet[2739]: I1213 09:04:11.111503 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67d6ce33-fde6-47c0-a23e-dcb137fc2649-node-certs\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111571 kubelet[2739]: I1213 09:04:11.111538 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-bin-dir\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111660 kubelet[2739]: I1213 09:04:11.111569 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjhx\" (UniqueName: \"kubernetes.io/projected/67d6ce33-fde6-47c0-a23e-dcb137fc2649-kube-api-access-brjhx\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111660 kubelet[2739]: I1213 09:04:11.111604 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-lib-calico\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111660 kubelet[2739]: I1213 09:04:11.111634 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-flexvol-driver-host\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111728 kubelet[2739]: I1213 09:04:11.111711 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-net-dir\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111776 kubelet[2739]: I1213 09:04:11.111750 2739 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-run-calico\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.111829 kubelet[2739]: I1213 09:04:11.111791 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-lib-modules\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.112978 kubelet[2739]: I1213 09:04:11.112939 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-log-dir\") pod \"calico-node-z7qzd\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " pod="calico-system/calico-node-z7qzd" Dec 13 09:04:11.138105 kubelet[2739]: I1213 09:04:11.137953 2739 topology_manager.go:215] "Topology Admit Handler" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100" podNamespace="calico-system" podName="csi-node-driver-rzphw" Dec 13 09:04:11.138761 kubelet[2739]: E1213 09:04:11.138287 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzphw" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100" Dec 13 09:04:11.205975 containerd[1469]: time="2024-12-13T09:04:11.205921128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d47f5c4f6-c82nr,Uid:07fce06a-0764-4ff9-a3f3-ce807df56785,Namespace:calico-system,Attempt:0,}" Dec 13 09:04:11.214769 kubelet[2739]: I1213 09:04:11.214719 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5aed41f-ee8e-4f6b-9d24-6472c4316100-kubelet-dir\") pod \"csi-node-driver-rzphw\" (UID: \"a5aed41f-ee8e-4f6b-9d24-6472c4316100\") " pod="calico-system/csi-node-driver-rzphw" Dec 13 09:04:11.214906 kubelet[2739]: I1213 09:04:11.214790 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5aed41f-ee8e-4f6b-9d24-6472c4316100-socket-dir\") pod \"csi-node-driver-rzphw\" (UID: \"a5aed41f-ee8e-4f6b-9d24-6472c4316100\") " pod="calico-system/csi-node-driver-rzphw" Dec 13 09:04:11.214906 kubelet[2739]: I1213 09:04:11.214810 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbp7l\" (UniqueName: \"kubernetes.io/projected/a5aed41f-ee8e-4f6b-9d24-6472c4316100-kube-api-access-sbp7l\") pod \"csi-node-driver-rzphw\" (UID: \"a5aed41f-ee8e-4f6b-9d24-6472c4316100\") " pod="calico-system/csi-node-driver-rzphw" Dec 13 09:04:11.214906 kubelet[2739]: I1213 09:04:11.214852 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5aed41f-ee8e-4f6b-9d24-6472c4316100-registration-dir\") pod \"csi-node-driver-rzphw\" (UID: \"a5aed41f-ee8e-4f6b-9d24-6472c4316100\") " pod="calico-system/csi-node-driver-rzphw" Dec 13 09:04:11.217643 kubelet[2739]: 
I1213 09:04:11.217609 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5aed41f-ee8e-4f6b-9d24-6472c4316100-varrun\") pod \"csi-node-driver-rzphw\" (UID: \"a5aed41f-ee8e-4f6b-9d24-6472c4316100\") " pod="calico-system/csi-node-driver-rzphw" Dec 13 09:04:11.218932 kubelet[2739]: E1213 09:04:11.218908 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.218932 kubelet[2739]: W1213 09:04:11.218933 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.219431 kubelet[2739]: E1213 09:04:11.218957 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.219431 kubelet[2739]: E1213 09:04:11.219329 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.219431 kubelet[2739]: W1213 09:04:11.219350 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.219431 kubelet[2739]: E1213 09:04:11.219373 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.219818 kubelet[2739]: E1213 09:04:11.219686 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.219818 kubelet[2739]: W1213 09:04:11.219697 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.219818 kubelet[2739]: E1213 09:04:11.219728 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.220083 kubelet[2739]: E1213 09:04:11.219966 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.220083 kubelet[2739]: W1213 09:04:11.219978 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.220083 kubelet[2739]: E1213 09:04:11.220062 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:04:11.220403 kubelet[2739]: E1213 09:04:11.220315 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.220403 kubelet[2739]: W1213 09:04:11.220326 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.220604 kubelet[2739]: E1213 09:04:11.220581 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.220604 kubelet[2739]: E1213 09:04:11.220589 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.220740 kubelet[2739]: W1213 09:04:11.220592 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.220740 kubelet[2739]: E1213 09:04:11.220704 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.225479 kubelet[2739]: E1213 09:04:11.221688 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.225479 kubelet[2739]: W1213 09:04:11.225449 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.227232 kubelet[2739]: E1213 09:04:11.226249 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.227511 kubelet[2739]: E1213 09:04:11.227492 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.229220 kubelet[2739]: W1213 09:04:11.229021 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.231254 kubelet[2739]: E1213 09:04:11.229405 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.231656 kubelet[2739]: E1213 09:04:11.231423 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.231656 kubelet[2739]: W1213 09:04:11.231456 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.231656 kubelet[2739]: E1213 09:04:11.231539 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:04:11.231856 kubelet[2739]: E1213 09:04:11.231836 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.231928 kubelet[2739]: W1213 09:04:11.231916 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.232076 kubelet[2739]: E1213 09:04:11.232064 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.232621 kubelet[2739]: E1213 09:04:11.232599 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.232779 kubelet[2739]: W1213 09:04:11.232683 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.232876 kubelet[2739]: E1213 09:04:11.232860 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.233074 kubelet[2739]: E1213 09:04:11.233060 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.233152 kubelet[2739]: W1213 09:04:11.233141 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.233729 kubelet[2739]: E1213 09:04:11.233712 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.236251 kubelet[2739]: E1213 09:04:11.236087 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.236251 kubelet[2739]: W1213 09:04:11.236116 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.236434 kubelet[2739]: E1213 09:04:11.236362 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.236669 kubelet[2739]: E1213 09:04:11.236647 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.236749 kubelet[2739]: W1213 09:04:11.236737 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.236924 kubelet[2739]: E1213 09:04:11.236911 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:04:11.237037 kubelet[2739]: E1213 09:04:11.237027 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.237112 kubelet[2739]: W1213 09:04:11.237100 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.237317 kubelet[2739]: E1213 09:04:11.237303 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.237457 kubelet[2739]: E1213 09:04:11.237446 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.237532 kubelet[2739]: W1213 09:04:11.237520 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.237675 kubelet[2739]: E1213 09:04:11.237663 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.242601 kubelet[2739]: E1213 09:04:11.240984 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.242953 kubelet[2739]: W1213 09:04:11.242839 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.243235 kubelet[2739]: E1213 09:04:11.243162 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.243332 kubelet[2739]: W1213 09:04:11.243307 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.245147 kubelet[2739]: E1213 09:04:11.243526 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.245414 kubelet[2739]: E1213 09:04:11.245252 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.245495 kubelet[2739]: E1213 09:04:11.245323 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.245548 kubelet[2739]: W1213 09:04:11.245537 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.245701 kubelet[2739]: E1213 09:04:11.245688 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:04:11.245849 kubelet[2739]: E1213 09:04:11.245839 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.245911 kubelet[2739]: W1213 09:04:11.245900 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.246029 kubelet[2739]: E1213 09:04:11.246006 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.250622 kubelet[2739]: E1213 09:04:11.250273 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.250622 kubelet[2739]: W1213 09:04:11.250295 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.250622 kubelet[2739]: E1213 09:04:11.250345 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.250622 kubelet[2739]: E1213 09:04:11.250543 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.250622 kubelet[2739]: W1213 09:04:11.250551 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.254700 kubelet[2739]: E1213 09:04:11.254435 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.254700 kubelet[2739]: E1213 09:04:11.254541 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.254700 kubelet[2739]: W1213 09:04:11.254551 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.254700 kubelet[2739]: E1213 09:04:11.254586 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:04:11.256726 kubelet[2739]: E1213 09:04:11.256372 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.257084 kubelet[2739]: W1213 09:04:11.257062 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.258483 kubelet[2739]: E1213 09:04:11.258393 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.258483 kubelet[2739]: W1213 09:04:11.258411 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.258872 kubelet[2739]: E1213 09:04:11.258856 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.266979 kubelet[2739]: W1213 09:04:11.266238 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.266979 kubelet[2739]: E1213 09:04:11.261869 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.266979 kubelet[2739]: E1213 09:04:11.261879 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.266979 kubelet[2739]: E1213 09:04:11.266383 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.269353 kubelet[2739]: E1213 09:04:11.269325 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.270128 kubelet[2739]: W1213 09:04:11.270095 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.270739 kubelet[2739]: E1213 09:04:11.270720 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:11.270959 kubelet[2739]: E1213 09:04:11.270873 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:11.270959 kubelet[2739]: W1213 09:04:11.270885 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:11.271250 kubelet[2739]: E1213 09:04:11.271160 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 13 09:04:11.287857 containerd[1469]: time="2024-12-13T09:04:11.287436645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:04:11.287857 containerd[1469]: time="2024-12-13T09:04:11.287543807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:04:11.287857 containerd[1469]: time="2024-12-13T09:04:11.287555647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:11.287857 containerd[1469]: time="2024-12-13T09:04:11.287678449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:11.310633 systemd[1]: Started cri-containerd-5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f.scope - libcontainer container 5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f.
Dec 13 09:04:11.332040 containerd[1469]: time="2024-12-13T09:04:11.331298394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7qzd,Uid:67d6ce33-fde6-47c0-a23e-dcb137fc2649,Namespace:calico-system,Attempt:0,}"
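Underneath the FlexVolume noise, startup is progressing: the runc v2 shim loads its ttrpc plugins, systemd places the new container in a transient cri-containerd-<id>.scope unit, and kubelet requests a pod sandbox for calico-node-z7qzd over CRI. Below is a sketch of that CRI call; the metadata values are copied from the record above, while the socket path is the usual containerd default and an assumption about this host.

// runsandbox_sketch.go - sketch of the CRI RunPodSandbox call behind the
// "RunPodSandbox for &PodSandboxMetadata{...}" record above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-z7qzd",
				Uid:       "67d6ce33-fde6-47c0-a23e-dcb137fc2649",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd answers with the 64-hex sandbox id that later shows up in
	// the cri-containerd-<id>.scope unit name.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}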
Dec 13 09:04:11.384428 containerd[1469]: time="2024-12-13T09:04:11.383728441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d47f5c4f6-c82nr,Uid:07fce06a-0764-4ff9-a3f3-ce807df56785,Namespace:calico-system,Attempt:0,} returns sandbox id \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\""
Dec 13 09:04:11.389483 containerd[1469]: time="2024-12-13T09:04:11.388082992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 09:04:11.395266 containerd[1469]: time="2024-12-13T09:04:11.386972814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:04:11.395266 containerd[1469]: time="2024-12-13T09:04:11.394926902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:04:11.396473 containerd[1469]: time="2024-12-13T09:04:11.395675275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:11.398041 containerd[1469]: time="2024-12-13T09:04:11.397703547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:04:11.420716 systemd[1]: Started cri-containerd-d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7.scope - libcontainer container d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7.
Dec 13 09:04:11.453934 containerd[1469]: time="2024-12-13T09:04:11.453893575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7qzd,Uid:67d6ce33-fde6-47c0-a23e-dcb137fc2649,Namespace:calico-system,Attempt:0,} returns sandbox id \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\""
Dec 13 09:04:12.928031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447720719.mount: Deactivated successfully.
Dec 13 09:04:12.951345 kubelet[2739]: E1213 09:04:12.950960 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzphw" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100"
Dec 13 09:04:13.856955 containerd[1469]: time="2024-12-13T09:04:13.856883601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:04:13.858129 containerd[1469]: time="2024-12-13T09:04:13.858033579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Dec 13 09:04:13.859313 containerd[1469]: time="2024-12-13T09:04:13.859270318Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:04:13.862288 containerd[1469]: time="2024-12-13T09:04:13.862230164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:04:13.863271 containerd[1469]: time="2024-12-13T09:04:13.862984456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.474856263s"
Dec 13 09:04:13.863271 containerd[1469]: time="2024-12-13T09:04:13.863024136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Dec 13 09:04:13.865226 containerd[1469]: time="2024-12-13T09:04:13.865065648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 09:04:13.880768 containerd[1469]: time="2024-12-13T09:04:13.880571608Z" level=info msg="CreateContainer within sandbox \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 09:04:13.906695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931701463.mount: Deactivated successfully.
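The "Pulled image" record shows the typha image resolved to a content digest (sha256:768a19...) and unpacked in about 2.47s before the next pull (pod2daemon-flexvol) starts. Roughly the same pull can be reproduced by hand with the containerd Go client; the sketch below assumes the default socket path and the "k8s.io" namespace where CRI-managed images live, and is not how kubelet itself pulls (kubelet goes through the CRI ImageService).

// pullimage_sketch.go - reproducing the PullImage/ImageCreate sequence
// above with the containerd client, under the assumptions stated in the
// lead-in.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are kept in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// The printed digest matches the "repo digest" in the Pulled image record.
	fmt.Println(img.Name(), img.Target().Digest)
}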
Dec 13 09:04:13.911393 containerd[1469]: time="2024-12-13T09:04:13.910943199Z" level=info msg="CreateContainer within sandbox \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\""
Dec 13 09:04:13.914346 containerd[1469]: time="2024-12-13T09:04:13.914150489Z" level=info msg="StartContainer for \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\""
Dec 13 09:04:13.951034 systemd[1]: Started cri-containerd-0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e.scope - libcontainer container 0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e.
Dec 13 09:04:13.994636 containerd[1469]: time="2024-12-13T09:04:13.994587897Z" level=info msg="StartContainer for \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\" returns successfully"
Dec 13 09:04:14.121562 kubelet[2739]: E1213 09:04:14.121382 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:04:14.121562 kubelet[2739]: W1213 09:04:14.121467 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:04:14.122373 kubelet[2739]: E1213 09:04:14.121581 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
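With the image present, kubelet drives the rest of the CRI container lifecycle: CreateContainer inside the typha sandbox returns container id 0ef993..., a matching cri-containerd-0ef993....scope unit appears, and StartContainer returns successfully (the FlexVolume prober triplet then resumes at 09:04:14). A hedged sketch of that request pair, with ids and names copied from the records above and the default socket path assumed; a real call also carries the full pod SandboxConfig rather than the minimal one here.

// createcontainer_sketch.go - sketch of the CRI CreateContainer/StartContainer
// pair behind the records above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id taken from the log; normally discovered via ListPodSandbox.
	sandboxID := "5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f"
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"},
		},
		// Minimal sandbox config for illustration only.
		SandboxConfig: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name: "calico-typha-d47f5c4f6-c82nr", Namespace: "calico-system",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Starting the container is what makes systemd create the
	// cri-containerd-<id>.scope unit seen above.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started:", created.ContainerId)
}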
Dec 13 09:04:14.950919 kubelet[2739]: E1213 09:04:14.950839 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzphw" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100"
Dec 13 09:04:15.097221 kubelet[2739]: I1213 09:04:15.097154 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:04:15.142884 kubelet[2739]: E1213 09:04:15.142848 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:04:15.142884 kubelet[2739]: W1213 09:04:15.142879 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:04:15.143395 kubelet[2739]: E1213 09:04:15.142906 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
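The csi-node-driver-rzphw pod keeps failing to sync because the runtime still reports NetworkReady=false: no CNI configuration exists yet (it appears once calico-node writes one), so kubelet refuses to set up pod networking. Kubelet learns this condition through the CRI Status call, which the short sketch below queries directly; the socket path is again the assumed containerd default.

// networkready_sketch.go - querying the runtime conditions behind the
// "NetworkReady=false ... cni plugin not initialized" record above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	st, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range st.Status.Conditions {
		// While the CNI config directory is still empty, expect
		// RuntimeReady=true but NetworkReady=false with the reason seen
		// in the log.
		fmt.Printf("%s=%v reason=%s\n", c.Type, c.Status, c.Reason)
	}
}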
Error: unexpected end of JSON input" Dec 13 09:04:15.178860 kubelet[2739]: E1213 09:04:15.177943 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.178860 kubelet[2739]: W1213 09:04:15.177954 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.178860 kubelet[2739]: E1213 09:04:15.177972 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:15.178860 kubelet[2739]: E1213 09:04:15.178132 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.178860 kubelet[2739]: W1213 09:04:15.178141 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.178860 kubelet[2739]: E1213 09:04:15.178156 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:15.178860 kubelet[2739]: E1213 09:04:15.178363 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.178860 kubelet[2739]: W1213 09:04:15.178373 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.178860 kubelet[2739]: E1213 09:04:15.178389 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:15.179431 kubelet[2739]: E1213 09:04:15.179399 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.179483 kubelet[2739]: W1213 09:04:15.179428 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.179483 kubelet[2739]: E1213 09:04:15.179457 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:15.179822 kubelet[2739]: E1213 09:04:15.179750 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.179822 kubelet[2739]: W1213 09:04:15.179775 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.179989 kubelet[2739]: E1213 09:04:15.179906 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:04:15.180113 kubelet[2739]: E1213 09:04:15.180079 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.180113 kubelet[2739]: W1213 09:04:15.180102 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.180229 kubelet[2739]: E1213 09:04:15.180122 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:15.180439 kubelet[2739]: E1213 09:04:15.180421 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.180480 kubelet[2739]: W1213 09:04:15.180438 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.180521 kubelet[2739]: E1213 09:04:15.180488 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:15.180719 kubelet[2739]: E1213 09:04:15.180705 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.180764 kubelet[2739]: W1213 09:04:15.180719 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.180764 kubelet[2739]: E1213 09:04:15.180732 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:04:15.181353 kubelet[2739]: E1213 09:04:15.181331 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:04:15.181435 kubelet[2739]: W1213 09:04:15.181354 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:04:15.181435 kubelet[2739]: E1213 09:04:15.181374 2739 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:04:15.352033 containerd[1469]: time="2024-12-13T09:04:15.351681176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:15.356752 containerd[1469]: time="2024-12-13T09:04:15.356375445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 09:04:15.356752 containerd[1469]: time="2024-12-13T09:04:15.356626049Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:15.362912 containerd[1469]: time="2024-12-13T09:04:15.362648379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:15.363541 containerd[1469]: time="2024-12-13T09:04:15.363253908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.4981381s" Dec 13 09:04:15.363541 containerd[1469]: time="2024-12-13T09:04:15.363297789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 09:04:15.371669 containerd[1469]: time="2024-12-13T09:04:15.371552351Z" level=info msg="CreateContainer within sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 09:04:15.393410 containerd[1469]: time="2024-12-13T09:04:15.393103752Z" level=info msg="CreateContainer within sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f\"" Dec 13 09:04:15.394578 containerd[1469]: time="2024-12-13T09:04:15.394417932Z" level=info msg="StartContainer for \"59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f\"" Dec 13 09:04:15.430403 systemd[1]: Started cri-containerd-59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f.scope - libcontainer container 59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f. Dec 13 09:04:15.464889 containerd[1469]: time="2024-12-13T09:04:15.464554497Z" level=info msg="StartContainer for \"59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f\" returns successfully" Dec 13 09:04:15.482666 systemd[1]: cri-containerd-59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f.scope: Deactivated successfully. 
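
The repeated driver-call.go failures above have a single cause: when probing plugins, kubelet executes each candidate FlexVolume driver with the argument init and parses its stdout as JSON, and the nodeagent~uds driver binary is not installed yet (the flexvol-driver container pulled just above is what drops it into place), so the call produces empty output and JSON decoding fails. A minimal Go sketch of that probe; driverStatus and probeDriver are simplified illustrations, not kubelet's actual types:

    // Sketch of the FlexVolume probe; driverStatus is a simplified
    // assumption, not kubelet's exact type.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type driverStatus struct {
        Status  string `json:"status"` // a real driver answers init with {"status":"Success", ...}
        Message string `json:"message,omitempty"`
    }

    func probeDriver(path string) error {
        // A missing binary fails here and leaves out empty,
        // matching the W1213 "driver call failed" lines above.
        out, _ := exec.Command(path, "init").CombinedOutput()
        var st driverStatus
        // Unmarshalling empty output is exactly "unexpected end of JSON input".
        if err := json.Unmarshal(out, &st); err != nil {
            return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", out, err)
        }
        return nil
    }

    func main() {
        fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
    }

The spam should stop on its own once the flexvol-driver container has installed the uds binary and the probe starts returning valid JSON.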
Dec 13 09:04:15.602455 containerd[1469]: time="2024-12-13T09:04:15.602247068Z" level=info msg="shim disconnected" id=59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f namespace=k8s.io Dec 13 09:04:15.602455 containerd[1469]: time="2024-12-13T09:04:15.602333149Z" level=warning msg="cleaning up after shim disconnected" id=59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f namespace=k8s.io Dec 13 09:04:15.602455 containerd[1469]: time="2024-12-13T09:04:15.602344589Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:15.874395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f-rootfs.mount: Deactivated successfully. Dec 13 09:04:16.103459 containerd[1469]: time="2024-12-13T09:04:16.103389062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 09:04:16.133664 kubelet[2739]: I1213 09:04:16.132805 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d47f5c4f6-c82nr" podStartSLOduration=3.656091719 podStartE2EDuration="6.132788211s" podCreationTimestamp="2024-12-13 09:04:10 +0000 UTC" firstStartedPulling="2024-12-13 09:04:11.387644505 +0000 UTC m=+25.545326994" lastFinishedPulling="2024-12-13 09:04:13.864340957 +0000 UTC m=+28.022023486" observedRunningTime="2024-12-13 09:04:14.129068302 +0000 UTC m=+28.286750831" watchObservedRunningTime="2024-12-13 09:04:16.132788211 +0000 UTC m=+30.290470740" Dec 13 09:04:16.950524 kubelet[2739]: E1213 09:04:16.950001 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzphw" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100" Dec 13 09:04:18.644231 containerd[1469]: time="2024-12-13T09:04:18.643858715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:18.647896 containerd[1469]: time="2024-12-13T09:04:18.647843291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 09:04:18.649328 containerd[1469]: time="2024-12-13T09:04:18.649292672Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:18.652312 containerd[1469]: time="2024-12-13T09:04:18.652249193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:18.653426 containerd[1469]: time="2024-12-13T09:04:18.652977123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.54952026s" Dec 13 09:04:18.653426 containerd[1469]: time="2024-12-13T09:04:18.653010644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 09:04:18.656007 
containerd[1469]: time="2024-12-13T09:04:18.655974805Z" level=info msg="CreateContainer within sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 09:04:18.672660 containerd[1469]: time="2024-12-13T09:04:18.672604439Z" level=info msg="CreateContainer within sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d\"" Dec 13 09:04:18.675261 containerd[1469]: time="2024-12-13T09:04:18.673793896Z" level=info msg="StartContainer for \"6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d\"" Dec 13 09:04:18.725120 systemd[1]: Started cri-containerd-6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d.scope - libcontainer container 6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d. Dec 13 09:04:18.801120 containerd[1469]: time="2024-12-13T09:04:18.801068963Z" level=info msg="StartContainer for \"6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d\" returns successfully" Dec 13 09:04:18.953353 kubelet[2739]: E1213 09:04:18.950407 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzphw" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100" Dec 13 09:04:19.364470 systemd[1]: cri-containerd-6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d.scope: Deactivated successfully. Dec 13 09:04:19.379929 kubelet[2739]: I1213 09:04:19.378442 2739 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 09:04:19.403415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d-rootfs.mount: Deactivated successfully. Dec 13 09:04:19.430224 kubelet[2739]: I1213 09:04:19.428128 2739 topology_manager.go:215] "Topology Admit Handler" podUID="e5cf6eb7-a899-4823-b6d3-c77cbab40250" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8vqm4" Dec 13 09:04:19.433918 kubelet[2739]: I1213 09:04:19.433789 2739 topology_manager.go:215] "Topology Admit Handler" podUID="65b6b39e-1d72-462b-8014-1227230aa5b7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-92kcx" Dec 13 09:04:19.442018 kubelet[2739]: I1213 09:04:19.441832 2739 topology_manager.go:215] "Topology Admit Handler" podUID="f31f3787-a93b-48e8-9d06-71efae4d1e4f" podNamespace="calico-apiserver" podName="calico-apiserver-5dc6fbbbbc-bmlsz" Dec 13 09:04:19.442554 kubelet[2739]: I1213 09:04:19.442531 2739 topology_manager.go:215] "Topology Admit Handler" podUID="98dfd1c7-089d-4dd5-bd73-26a0d273295c" podNamespace="calico-system" podName="calico-kube-controllers-85c4855bd8-km44p" Dec 13 09:04:19.447343 kubelet[2739]: I1213 09:04:19.447308 2739 topology_manager.go:215] "Topology Admit Handler" podUID="8eb77da9-ce92-437d-a506-03a84d1e2646" podNamespace="calico-apiserver" podName="calico-apiserver-5dc6fbbbbc-n6v2x" Dec 13 09:04:19.456922 systemd[1]: Created slice kubepods-burstable-pode5cf6eb7_a899_4823_b6d3_c77cbab40250.slice - libcontainer container kubepods-burstable-pode5cf6eb7_a899_4823_b6d3_c77cbab40250.slice. 
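
The PullImage → CreateContainer → StartContainer → cri-containerd scope → "shim disconnected" sequence above is the standard CRI container lifecycle. Kubelet drives it over CRI gRPC; the same operations can be sketched directly against containerd's Go client (the image ref and container name come from the log, while the socket path, "k8s.io" namespace, and snapshot name are generic example values):

    // Minimal sketch of the pull/create/start flow logged above, using
    // containerd's Go client rather than kubelet's CRI gRPC path.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images and containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.29.1", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        container, err := client.NewContainer(ctx, "install-cni",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("install-cni-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // Starting the task is what produces the cri-containerd-<id>.scope unit
        // and, when the process exits, the "shim disconnected" cleanup above.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }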
Dec 13 09:04:19.475568 systemd[1]: Created slice kubepods-besteffort-pod8eb77da9_ce92_437d_a506_03a84d1e2646.slice - libcontainer container kubepods-besteffort-pod8eb77da9_ce92_437d_a506_03a84d1e2646.slice. Dec 13 09:04:19.483628 systemd[1]: Created slice kubepods-burstable-pod65b6b39e_1d72_462b_8014_1227230aa5b7.slice - libcontainer container kubepods-burstable-pod65b6b39e_1d72_462b_8014_1227230aa5b7.slice. Dec 13 09:04:19.500296 systemd[1]: Created slice kubepods-besteffort-pod98dfd1c7_089d_4dd5_bd73_26a0d273295c.slice - libcontainer container kubepods-besteffort-pod98dfd1c7_089d_4dd5_bd73_26a0d273295c.slice. Dec 13 09:04:19.511499 containerd[1469]: time="2024-12-13T09:04:19.510168906Z" level=info msg="shim disconnected" id=6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d namespace=k8s.io Dec 13 09:04:19.511499 containerd[1469]: time="2024-12-13T09:04:19.510815595Z" level=warning msg="cleaning up after shim disconnected" id=6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d namespace=k8s.io Dec 13 09:04:19.511499 containerd[1469]: time="2024-12-13T09:04:19.510827635Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:19.511676 kubelet[2739]: I1213 09:04:19.510837 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98dfd1c7-089d-4dd5-bd73-26a0d273295c-tigera-ca-bundle\") pod \"calico-kube-controllers-85c4855bd8-km44p\" (UID: \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\") " pod="calico-system/calico-kube-controllers-85c4855bd8-km44p" Dec 13 09:04:19.511676 kubelet[2739]: I1213 09:04:19.510881 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5cf6eb7-a899-4823-b6d3-c77cbab40250-config-volume\") pod \"coredns-7db6d8ff4d-8vqm4\" (UID: \"e5cf6eb7-a899-4823-b6d3-c77cbab40250\") " pod="kube-system/coredns-7db6d8ff4d-8vqm4" Dec 13 09:04:19.511676 kubelet[2739]: I1213 09:04:19.510907 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65b6b39e-1d72-462b-8014-1227230aa5b7-config-volume\") pod \"coredns-7db6d8ff4d-92kcx\" (UID: \"65b6b39e-1d72-462b-8014-1227230aa5b7\") " pod="kube-system/coredns-7db6d8ff4d-92kcx" Dec 13 09:04:19.511676 kubelet[2739]: I1213 09:04:19.510924 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8eb77da9-ce92-437d-a506-03a84d1e2646-calico-apiserver-certs\") pod \"calico-apiserver-5dc6fbbbbc-n6v2x\" (UID: \"8eb77da9-ce92-437d-a506-03a84d1e2646\") " pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-n6v2x" Dec 13 09:04:19.511676 kubelet[2739]: I1213 09:04:19.510952 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbj2\" (UniqueName: \"kubernetes.io/projected/98dfd1c7-089d-4dd5-bd73-26a0d273295c-kube-api-access-zxbj2\") pod \"calico-kube-controllers-85c4855bd8-km44p\" (UID: \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\") " pod="calico-system/calico-kube-controllers-85c4855bd8-km44p" Dec 13 09:04:19.513075 kubelet[2739]: I1213 09:04:19.510975 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/f31f3787-a93b-48e8-9d06-71efae4d1e4f-calico-apiserver-certs\") pod \"calico-apiserver-5dc6fbbbbc-bmlsz\" (UID: \"f31f3787-a93b-48e8-9d06-71efae4d1e4f\") " pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-bmlsz" Dec 13 09:04:19.513075 kubelet[2739]: I1213 09:04:19.510993 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6mm\" (UniqueName: \"kubernetes.io/projected/65b6b39e-1d72-462b-8014-1227230aa5b7-kube-api-access-bj6mm\") pod \"coredns-7db6d8ff4d-92kcx\" (UID: \"65b6b39e-1d72-462b-8014-1227230aa5b7\") " pod="kube-system/coredns-7db6d8ff4d-92kcx" Dec 13 09:04:19.513075 kubelet[2739]: I1213 09:04:19.511010 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxjl\" (UniqueName: \"kubernetes.io/projected/f31f3787-a93b-48e8-9d06-71efae4d1e4f-kube-api-access-lzxjl\") pod \"calico-apiserver-5dc6fbbbbc-bmlsz\" (UID: \"f31f3787-a93b-48e8-9d06-71efae4d1e4f\") " pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-bmlsz" Dec 13 09:04:19.513075 kubelet[2739]: I1213 09:04:19.511037 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48k8\" (UniqueName: \"kubernetes.io/projected/e5cf6eb7-a899-4823-b6d3-c77cbab40250-kube-api-access-j48k8\") pod \"coredns-7db6d8ff4d-8vqm4\" (UID: \"e5cf6eb7-a899-4823-b6d3-c77cbab40250\") " pod="kube-system/coredns-7db6d8ff4d-8vqm4" Dec 13 09:04:19.513075 kubelet[2739]: I1213 09:04:19.511054 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s775b\" (UniqueName: \"kubernetes.io/projected/8eb77da9-ce92-437d-a506-03a84d1e2646-kube-api-access-s775b\") pod \"calico-apiserver-5dc6fbbbbc-n6v2x\" (UID: \"8eb77da9-ce92-437d-a506-03a84d1e2646\") " pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-n6v2x" Dec 13 09:04:19.517253 systemd[1]: Created slice kubepods-besteffort-podf31f3787_a93b_48e8_9d06_71efae4d1e4f.slice - libcontainer container kubepods-besteffort-podf31f3787_a93b_48e8_9d06_71efae4d1e4f.slice. 
Dec 13 09:04:19.767364 containerd[1469]: time="2024-12-13T09:04:19.766442277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vqm4,Uid:e5cf6eb7-a899-4823-b6d3-c77cbab40250,Namespace:kube-system,Attempt:0,}" Dec 13 09:04:19.783580 containerd[1469]: time="2024-12-13T09:04:19.783530072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-n6v2x,Uid:8eb77da9-ce92-437d-a506-03a84d1e2646,Namespace:calico-apiserver,Attempt:0,}" Dec 13 09:04:19.792461 containerd[1469]: time="2024-12-13T09:04:19.792417955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-92kcx,Uid:65b6b39e-1d72-462b-8014-1227230aa5b7,Namespace:kube-system,Attempt:0,}" Dec 13 09:04:19.813864 containerd[1469]: time="2024-12-13T09:04:19.813550126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c4855bd8-km44p,Uid:98dfd1c7-089d-4dd5-bd73-26a0d273295c,Namespace:calico-system,Attempt:0,}" Dec 13 09:04:19.827049 containerd[1469]: time="2024-12-13T09:04:19.826002898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-bmlsz,Uid:f31f3787-a93b-48e8-9d06-71efae4d1e4f,Namespace:calico-apiserver,Attempt:0,}" Dec 13 09:04:19.970078 containerd[1469]: time="2024-12-13T09:04:19.970011322Z" level=error msg="Failed to destroy network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.971527 containerd[1469]: time="2024-12-13T09:04:19.971470422Z" level=error msg="encountered an error cleaning up failed sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.971663 containerd[1469]: time="2024-12-13T09:04:19.971633064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-n6v2x,Uid:8eb77da9-ce92-437d-a506-03a84d1e2646,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.972008 kubelet[2739]: E1213 09:04:19.971962 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.972366 kubelet[2739]: E1213 09:04:19.972044 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-n6v2x" Dec 13 09:04:19.972366 kubelet[2739]: E1213 09:04:19.972094 2739 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-n6v2x" Dec 13 09:04:19.972366 kubelet[2739]: E1213 09:04:19.972134 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dc6fbbbbc-n6v2x_calico-apiserver(8eb77da9-ce92-437d-a506-03a84d1e2646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc6fbbbbc-n6v2x_calico-apiserver(8eb77da9-ce92-437d-a506-03a84d1e2646)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-n6v2x" podUID="8eb77da9-ce92-437d-a506-03a84d1e2646" Dec 13 09:04:19.987590 containerd[1469]: time="2024-12-13T09:04:19.987518843Z" level=error msg="Failed to destroy network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.988794 containerd[1469]: time="2024-12-13T09:04:19.988737540Z" level=error msg="encountered an error cleaning up failed sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.992211 containerd[1469]: time="2024-12-13T09:04:19.989632632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vqm4,Uid:e5cf6eb7-a899-4823-b6d3-c77cbab40250,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.992325 kubelet[2739]: E1213 09:04:19.990771 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.992325 kubelet[2739]: E1213 09:04:19.990830 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8vqm4" Dec 13 09:04:19.992325 kubelet[2739]: E1213 09:04:19.990849 2739 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8vqm4" Dec 13 09:04:19.992477 kubelet[2739]: E1213 09:04:19.990890 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8vqm4_kube-system(e5cf6eb7-a899-4823-b6d3-c77cbab40250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8vqm4_kube-system(e5cf6eb7-a899-4823-b6d3-c77cbab40250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8vqm4" podUID="e5cf6eb7-a899-4823-b6d3-c77cbab40250" Dec 13 09:04:19.993174 containerd[1469]: time="2024-12-13T09:04:19.992665954Z" level=error msg="Failed to destroy network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.993174 containerd[1469]: time="2024-12-13T09:04:19.993027799Z" level=error msg="encountered an error cleaning up failed sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.993174 containerd[1469]: time="2024-12-13T09:04:19.993074200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-92kcx,Uid:65b6b39e-1d72-462b-8014-1227230aa5b7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.994382 kubelet[2739]: E1213 09:04:19.994335 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:19.994465 kubelet[2739]: E1213 09:04:19.994396 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-92kcx" Dec 13 09:04:19.994465 kubelet[2739]: E1213 09:04:19.994417 2739 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-92kcx" Dec 13 09:04:19.994524 kubelet[2739]: E1213 09:04:19.994459 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-92kcx_kube-system(65b6b39e-1d72-462b-8014-1227230aa5b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-92kcx_kube-system(65b6b39e-1d72-462b-8014-1227230aa5b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-92kcx" podUID="65b6b39e-1d72-462b-8014-1227230aa5b7" Dec 13 09:04:20.011770 containerd[1469]: time="2024-12-13T09:04:20.010490797Z" level=error msg="Failed to destroy network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.011770 containerd[1469]: time="2024-12-13T09:04:20.010869362Z" level=error msg="encountered an error cleaning up failed sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.011770 containerd[1469]: time="2024-12-13T09:04:20.010930563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c4855bd8-km44p,Uid:98dfd1c7-089d-4dd5-bd73-26a0d273295c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.012002 kubelet[2739]: E1213 09:04:20.011138 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.012002 kubelet[2739]: E1213 09:04:20.011209 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c4855bd8-km44p" Dec 13 09:04:20.012002 kubelet[2739]: E1213 09:04:20.011250 2739 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c4855bd8-km44p" Dec 13 09:04:20.012118 kubelet[2739]: E1213 09:04:20.011289 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85c4855bd8-km44p_calico-system(98dfd1c7-089d-4dd5-bd73-26a0d273295c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85c4855bd8-km44p_calico-system(98dfd1c7-089d-4dd5-bd73-26a0d273295c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85c4855bd8-km44p" podUID="98dfd1c7-089d-4dd5-bd73-26a0d273295c" Dec 13 09:04:20.019239 containerd[1469]: time="2024-12-13T09:04:20.019078673Z" level=error msg="Failed to destroy network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.020590 containerd[1469]: time="2024-12-13T09:04:20.020536853Z" level=error msg="encountered an error cleaning up failed sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.020781 containerd[1469]: time="2024-12-13T09:04:20.020702775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-bmlsz,Uid:f31f3787-a93b-48e8-9d06-71efae4d1e4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.021185 kubelet[2739]: E1213 09:04:20.021145 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Dec 13 09:04:20.021324 kubelet[2739]: E1213 09:04:20.021295 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-bmlsz" Dec 13 09:04:20.021400 kubelet[2739]: E1213 09:04:20.021320 2739 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-bmlsz" Dec 13 09:04:20.021400 kubelet[2739]: E1213 09:04:20.021377 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dc6fbbbbc-bmlsz_calico-apiserver(f31f3787-a93b-48e8-9d06-71efae4d1e4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc6fbbbbc-bmlsz_calico-apiserver(f31f3787-a93b-48e8-9d06-71efae4d1e4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-bmlsz" podUID="f31f3787-a93b-48e8-9d06-71efae4d1e4f" Dec 13 09:04:20.118480 containerd[1469]: time="2024-12-13T09:04:20.118389496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 09:04:20.120145 kubelet[2739]: I1213 09:04:20.120103 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:20.123460 containerd[1469]: time="2024-12-13T09:04:20.123202641Z" level=info msg="StopPodSandbox for \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\"" Dec 13 09:04:20.123569 kubelet[2739]: I1213 09:04:20.123471 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:20.125098 containerd[1469]: time="2024-12-13T09:04:20.124171414Z" level=info msg="Ensure that sandbox cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8 in task-service has been cleanup successfully" Dec 13 09:04:20.125098 containerd[1469]: time="2024-12-13T09:04:20.124865463Z" level=info msg="StopPodSandbox for \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\"" Dec 13 09:04:20.126286 containerd[1469]: time="2024-12-13T09:04:20.125963758Z" level=info msg="Ensure that sandbox b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005 in task-service has been cleanup successfully" Dec 13 09:04:20.135452 kubelet[2739]: I1213 09:04:20.135349 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:20.137639 containerd[1469]: 
time="2024-12-13T09:04:20.137602836Z" level=info msg="StopPodSandbox for \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\"" Dec 13 09:04:20.138097 containerd[1469]: time="2024-12-13T09:04:20.137971241Z" level=info msg="Ensure that sandbox 5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983 in task-service has been cleanup successfully" Dec 13 09:04:20.145486 kubelet[2739]: I1213 09:04:20.145374 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:20.150852 containerd[1469]: time="2024-12-13T09:04:20.150638812Z" level=info msg="StopPodSandbox for \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\"" Dec 13 09:04:20.156214 containerd[1469]: time="2024-12-13T09:04:20.155179593Z" level=info msg="Ensure that sandbox cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58 in task-service has been cleanup successfully" Dec 13 09:04:20.161778 kubelet[2739]: I1213 09:04:20.161715 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:20.164769 containerd[1469]: time="2024-12-13T09:04:20.163597747Z" level=info msg="StopPodSandbox for \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\"" Dec 13 09:04:20.164769 containerd[1469]: time="2024-12-13T09:04:20.163884391Z" level=info msg="Ensure that sandbox c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112 in task-service has been cleanup successfully" Dec 13 09:04:20.213657 containerd[1469]: time="2024-12-13T09:04:20.213599743Z" level=error msg="StopPodSandbox for \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\" failed" error="failed to destroy network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.213904 kubelet[2739]: E1213 09:04:20.213864 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:20.213986 kubelet[2739]: E1213 09:04:20.213924 2739 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005"} Dec 13 09:04:20.214021 kubelet[2739]: E1213 09:04:20.213998 2739 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:04:20.214079 kubelet[2739]: E1213 09:04:20.214030 2739 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85c4855bd8-km44p" podUID="98dfd1c7-089d-4dd5-bd73-26a0d273295c" Dec 13 09:04:20.222481 containerd[1469]: time="2024-12-13T09:04:20.222423142Z" level=error msg="StopPodSandbox for \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\" failed" error="failed to destroy network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.222871 kubelet[2739]: E1213 09:04:20.222825 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:20.222946 kubelet[2739]: E1213 09:04:20.222878 2739 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58"} Dec 13 09:04:20.222946 kubelet[2739]: E1213 09:04:20.222911 2739 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8eb77da9-ce92-437d-a506-03a84d1e2646\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:04:20.222946 kubelet[2739]: E1213 09:04:20.222938 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8eb77da9-ce92-437d-a506-03a84d1e2646\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-n6v2x" podUID="8eb77da9-ce92-437d-a506-03a84d1e2646" Dec 13 09:04:20.226171 containerd[1469]: time="2024-12-13T09:04:20.226125193Z" level=error msg="StopPodSandbox for \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\" failed" error="failed to destroy network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.226665 kubelet[2739]: E1213 
09:04:20.226606 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:20.226665 kubelet[2739]: E1213 09:04:20.226657 2739 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983"} Dec 13 09:04:20.226774 kubelet[2739]: E1213 09:04:20.226701 2739 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65b6b39e-1d72-462b-8014-1227230aa5b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:04:20.226774 kubelet[2739]: E1213 09:04:20.226723 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65b6b39e-1d72-462b-8014-1227230aa5b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-92kcx" podUID="65b6b39e-1d72-462b-8014-1227230aa5b7" Dec 13 09:04:20.228477 containerd[1469]: time="2024-12-13T09:04:20.228357983Z" level=error msg="StopPodSandbox for \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\" failed" error="failed to destroy network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.228627 kubelet[2739]: E1213 09:04:20.228572 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:20.228627 kubelet[2739]: E1213 09:04:20.228618 2739 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8"} Dec 13 09:04:20.228709 kubelet[2739]: E1213 09:04:20.228653 2739 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f31f3787-a93b-48e8-9d06-71efae4d1e4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:04:20.228709 kubelet[2739]: E1213 09:04:20.228675 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f31f3787-a93b-48e8-9d06-71efae4d1e4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-bmlsz" podUID="f31f3787-a93b-48e8-9d06-71efae4d1e4f" Dec 13 09:04:20.239610 containerd[1469]: time="2024-12-13T09:04:20.239543614Z" level=error msg="StopPodSandbox for \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\" failed" error="failed to destroy network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:20.239815 kubelet[2739]: E1213 09:04:20.239767 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:20.239894 kubelet[2739]: E1213 09:04:20.239820 2739 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112"} Dec 13 09:04:20.239894 kubelet[2739]: E1213 09:04:20.239854 2739 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5cf6eb7-a899-4823-b6d3-c77cbab40250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:04:20.239894 kubelet[2739]: E1213 09:04:20.239880 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5cf6eb7-a899-4823-b6d3-c77cbab40250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8vqm4" podUID="e5cf6eb7-a899-4823-b6d3-c77cbab40250" Dec 13 09:04:20.672557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58-shm.mount: Deactivated 
successfully. Dec 13 09:04:20.672689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112-shm.mount: Deactivated successfully. Dec 13 09:04:20.958754 systemd[1]: Created slice kubepods-besteffort-poda5aed41f_ee8e_4f6b_9d24_6472c4316100.slice - libcontainer container kubepods-besteffort-poda5aed41f_ee8e_4f6b_9d24_6472c4316100.slice. Dec 13 09:04:20.963926 containerd[1469]: time="2024-12-13T09:04:20.963477442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzphw,Uid:a5aed41f-ee8e-4f6b-9d24-6472c4316100,Namespace:calico-system,Attempt:0,}" Dec 13 09:04:21.035138 containerd[1469]: time="2024-12-13T09:04:21.034281831Z" level=error msg="Failed to destroy network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:21.035752 containerd[1469]: time="2024-12-13T09:04:21.035559048Z" level=error msg="encountered an error cleaning up failed sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:21.035752 containerd[1469]: time="2024-12-13T09:04:21.035625369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzphw,Uid:a5aed41f-ee8e-4f6b-9d24-6472c4316100,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:21.037707 kubelet[2739]: E1213 09:04:21.037368 2739 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:21.039032 kubelet[2739]: E1213 09:04:21.037440 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rzphw" Dec 13 09:04:21.039032 kubelet[2739]: E1213 09:04:21.038090 2739 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rzphw" Dec 13 09:04:21.039032 kubelet[2739]: E1213 09:04:21.038159 2739 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rzphw_calico-system(a5aed41f-ee8e-4f6b-9d24-6472c4316100)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rzphw_calico-system(a5aed41f-ee8e-4f6b-9d24-6472c4316100)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rzphw" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100" Dec 13 09:04:21.038662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6-shm.mount: Deactivated successfully. Dec 13 09:04:21.165263 kubelet[2739]: I1213 09:04:21.165066 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:21.166423 containerd[1469]: time="2024-12-13T09:04:21.166307343Z" level=info msg="StopPodSandbox for \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\"" Dec 13 09:04:21.166848 containerd[1469]: time="2024-12-13T09:04:21.166720988Z" level=info msg="Ensure that sandbox 6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6 in task-service has been cleanup successfully" Dec 13 09:04:21.196497 containerd[1469]: time="2024-12-13T09:04:21.196344022Z" level=error msg="StopPodSandbox for \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\" failed" error="failed to destroy network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 09:04:21.196661 kubelet[2739]: E1213 09:04:21.196602 2739 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:21.196747 kubelet[2739]: E1213 09:04:21.196648 2739 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6"} Dec 13 09:04:21.196747 kubelet[2739]: E1213 09:04:21.196691 2739 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5aed41f-ee8e-4f6b-9d24-6472c4316100\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 09:04:21.196747 kubelet[2739]: E1213 09:04:21.196711 2739 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5aed41f-ee8e-4f6b-9d24-6472c4316100\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rzphw" podUID="a5aed41f-ee8e-4f6b-9d24-6472c4316100" Dec 13 09:04:24.192789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060057257.mount: Deactivated successfully. Dec 13 09:04:24.229209 containerd[1469]: time="2024-12-13T09:04:24.229123350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:24.230780 containerd[1469]: time="2024-12-13T09:04:24.230701130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 09:04:24.231909 containerd[1469]: time="2024-12-13T09:04:24.231821864Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:24.234983 containerd[1469]: time="2024-12-13T09:04:24.234941184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:24.235932 containerd[1469]: time="2024-12-13T09:04:24.235467990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.117027694s" Dec 13 09:04:24.235932 containerd[1469]: time="2024-12-13T09:04:24.235502111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 09:04:24.252423 containerd[1469]: time="2024-12-13T09:04:24.252366803Z" level=info msg="CreateContainer within sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 09:04:24.282616 containerd[1469]: time="2024-12-13T09:04:24.282556382Z" level=info msg="CreateContainer within sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\"" Dec 13 09:04:24.284440 containerd[1469]: time="2024-12-13T09:04:24.283713556Z" level=info msg="StartContainer for \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\"" Dec 13 09:04:24.314390 systemd[1]: Started cri-containerd-12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e.scope - libcontainer container 12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e. Dec 13 09:04:24.354934 containerd[1469]: time="2024-12-13T09:04:24.354809530Z" level=info msg="StartContainer for \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\" returns successfully" Dec 13 09:04:24.475149 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 13 09:04:24.475541 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 09:04:25.200205 kubelet[2739]: I1213 09:04:25.199923 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z7qzd" podStartSLOduration=2.419117869 podStartE2EDuration="15.199902987s" podCreationTimestamp="2024-12-13 09:04:10 +0000 UTC" firstStartedPulling="2024-12-13 09:04:11.455801526 +0000 UTC m=+25.613484015" lastFinishedPulling="2024-12-13 09:04:24.236586644 +0000 UTC m=+38.394269133" observedRunningTime="2024-12-13 09:04:25.198396768 +0000 UTC m=+39.356079337" watchObservedRunningTime="2024-12-13 09:04:25.199902987 +0000 UTC m=+39.357585516" Dec 13 09:04:31.953072 containerd[1469]: time="2024-12-13T09:04:31.952502817Z" level=info msg="StopPodSandbox for \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\"" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.031 [INFO][4171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.031 [INFO][4171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" iface="eth0" netns="/var/run/netns/cni-5c2a5968-6566-965b-8178-1544f63578f9" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.032 [INFO][4171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" iface="eth0" netns="/var/run/netns/cni-5c2a5968-6566-965b-8178-1544f63578f9" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.033 [INFO][4171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" iface="eth0" netns="/var/run/netns/cni-5c2a5968-6566-965b-8178-1544f63578f9" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.033 [INFO][4171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.033 [INFO][4171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.075 [INFO][4177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.075 [INFO][4177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.075 [INFO][4177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.089 [WARNING][4177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.089 [INFO][4177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.091 [INFO][4177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:32.095119 containerd[1469]: 2024-12-13 09:04:32.093 [INFO][4171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:32.098425 containerd[1469]: time="2024-12-13T09:04:32.097323177Z" level=info msg="TearDown network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\" successfully" Dec 13 09:04:32.098425 containerd[1469]: time="2024-12-13T09:04:32.097359738Z" level=info msg="StopPodSandbox for \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\" returns successfully" Dec 13 09:04:32.098425 containerd[1469]: time="2024-12-13T09:04:32.098062345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vqm4,Uid:e5cf6eb7-a899-4823-b6d3-c77cbab40250,Namespace:kube-system,Attempt:1,}" Dec 13 09:04:32.100619 systemd[1]: run-netns-cni\x2d5c2a5968\x2d6566\x2d965b\x2d8178\x2d1544f63578f9.mount: Deactivated successfully. Dec 13 09:04:32.254473 systemd-networkd[1379]: calicf8f2dc739b: Link UP Dec 13 09:04:32.262986 systemd-networkd[1379]: calicf8f2dc739b: Gained carrier Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.136 [INFO][4188] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.152 [INFO][4188] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0 coredns-7db6d8ff4d- kube-system e5cf6eb7-a899-4823-b6d3-c77cbab40250 808 0 2024-12-13 09:04:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-6-29baf1648e coredns-7db6d8ff4d-8vqm4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicf8f2dc739b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.152 [INFO][4188] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.186 [INFO][4196] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" HandleID="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.204 [INFO][4196] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" HandleID="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ceb30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-6-29baf1648e", "pod":"coredns-7db6d8ff4d-8vqm4", "timestamp":"2024-12-13 09:04:32.186854441 +0000 UTC"}, Hostname:"ci-4081-2-1-6-29baf1648e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.204 [INFO][4196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.204 [INFO][4196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.204 [INFO][4196] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-29baf1648e' Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.207 [INFO][4196] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.213 [INFO][4196] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.218 [INFO][4196] ipam/ipam.go 489: Trying affinity for 192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.221 [INFO][4196] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.223 [INFO][4196] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.224 [INFO][4196] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.225 [INFO][4196] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21 Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.231 [INFO][4196] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.240 [INFO][4196] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.65/26] block=192.168.36.64/26 handle="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" 
host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.240 [INFO][4196] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.65/26] handle="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.240 [INFO][4196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:32.282944 containerd[1469]: 2024-12-13 09:04:32.240 [INFO][4196] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.65/26] IPv6=[] ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" HandleID="k8s-pod-network.4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.285718 containerd[1469]: 2024-12-13 09:04:32.244 [INFO][4188] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5cf6eb7-a899-4823-b6d3-c77cbab40250", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"", Pod:"coredns-7db6d8ff4d-8vqm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf8f2dc739b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:32.285718 containerd[1469]: 2024-12-13 09:04:32.244 [INFO][4188] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.65/32] ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.285718 containerd[1469]: 2024-12-13 09:04:32.244 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf8f2dc739b ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.285718 containerd[1469]: 2024-12-13 09:04:32.260 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.285718 containerd[1469]: 2024-12-13 09:04:32.262 [INFO][4188] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5cf6eb7-a899-4823-b6d3-c77cbab40250", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21", Pod:"coredns-7db6d8ff4d-8vqm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf8f2dc739b", MAC:"fa:63:89:be:7a:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:32.285718 containerd[1469]: 2024-12-13 09:04:32.277 [INFO][4188] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8vqm4" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:32.312171 containerd[1469]: time="2024-12-13T09:04:32.310297838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:32.312171 containerd[1469]: time="2024-12-13T09:04:32.310365158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:32.312171 containerd[1469]: time="2024-12-13T09:04:32.310381679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:32.312171 containerd[1469]: time="2024-12-13T09:04:32.310474880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:32.332576 systemd[1]: Started cri-containerd-4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21.scope - libcontainer container 4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21. Dec 13 09:04:32.369627 containerd[1469]: time="2024-12-13T09:04:32.369516408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8vqm4,Uid:e5cf6eb7-a899-4823-b6d3-c77cbab40250,Namespace:kube-system,Attempt:1,} returns sandbox id \"4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21\"" Dec 13 09:04:32.373006 containerd[1469]: time="2024-12-13T09:04:32.372932806Z" level=info msg="CreateContainer within sandbox \"4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:04:32.394509 containerd[1469]: time="2024-12-13T09:04:32.394444922Z" level=info msg="CreateContainer within sandbox \"4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8923f0ec3b23089dadd45c96ca538afb6161b68ebeacb0883b522ca5ddfed297\"" Dec 13 09:04:32.399563 containerd[1469]: time="2024-12-13T09:04:32.399504778Z" level=info msg="StartContainer for \"8923f0ec3b23089dadd45c96ca538afb6161b68ebeacb0883b522ca5ddfed297\"" Dec 13 09:04:32.435441 systemd[1]: Started cri-containerd-8923f0ec3b23089dadd45c96ca538afb6161b68ebeacb0883b522ca5ddfed297.scope - libcontainer container 8923f0ec3b23089dadd45c96ca538afb6161b68ebeacb0883b522ca5ddfed297. Dec 13 09:04:32.473765 containerd[1469]: time="2024-12-13T09:04:32.473721274Z" level=info msg="StartContainer for \"8923f0ec3b23089dadd45c96ca538afb6161b68ebeacb0883b522ca5ddfed297\" returns successfully" Dec 13 09:04:32.952553 containerd[1469]: time="2024-12-13T09:04:32.952484535Z" level=info msg="StopPodSandbox for \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\"" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.005 [INFO][4326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.007 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" iface="eth0" netns="/var/run/netns/cni-f8ef2942-a379-23d4-eb2b-0bdedeb0cab8" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.008 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" iface="eth0" netns="/var/run/netns/cni-f8ef2942-a379-23d4-eb2b-0bdedeb0cab8" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.008 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" iface="eth0" netns="/var/run/netns/cni-f8ef2942-a379-23d4-eb2b-0bdedeb0cab8" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.008 [INFO][4326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.008 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.029 [INFO][4333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.029 [INFO][4333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.029 [INFO][4333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.039 [WARNING][4333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.039 [INFO][4333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.042 [INFO][4333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:33.045868 containerd[1469]: 2024-12-13 09:04:33.043 [INFO][4326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:33.045868 containerd[1469]: time="2024-12-13T09:04:33.045293667Z" level=info msg="TearDown network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\" successfully" Dec 13 09:04:33.045868 containerd[1469]: time="2024-12-13T09:04:33.045322588Z" level=info msg="StopPodSandbox for \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\" returns successfully" Dec 13 09:04:33.047568 containerd[1469]: time="2024-12-13T09:04:33.046871244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzphw,Uid:a5aed41f-ee8e-4f6b-9d24-6472c4316100,Namespace:calico-system,Attempt:1,}" Dec 13 09:04:33.101922 systemd[1]: run-netns-cni\x2df8ef2942\x2da379\x2d23d4\x2deb2b\x2d0bdedeb0cab8.mount: Deactivated successfully. 
Dec 13 09:04:33.190891 systemd-networkd[1379]: cali0b221ecdd8e: Link UP Dec 13 09:04:33.191717 systemd-networkd[1379]: cali0b221ecdd8e: Gained carrier Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.078 [INFO][4339] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.094 [INFO][4339] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0 csi-node-driver- calico-system a5aed41f-ee8e-4f6b-9d24-6472c4316100 817 0 2024-12-13 09:04:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-6-29baf1648e csi-node-driver-rzphw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b221ecdd8e [] []}} ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.095 [INFO][4339] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.135 [INFO][4350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" HandleID="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.149 [INFO][4350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" HandleID="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-6-29baf1648e", "pod":"csi-node-driver-rzphw", "timestamp":"2024-12-13 09:04:33.13517144 +0000 UTC"}, Hostname:"ci-4081-2-1-6-29baf1648e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.149 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.149 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.149 [INFO][4350] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-29baf1648e' Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.152 [INFO][4350] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.157 [INFO][4350] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.162 [INFO][4350] ipam/ipam.go 489: Trying affinity for 192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.165 [INFO][4350] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.168 [INFO][4350] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.168 [INFO][4350] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.170 [INFO][4350] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.175 [INFO][4350] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.183 [INFO][4350] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.66/26] block=192.168.36.64/26 handle="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.183 [INFO][4350] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.66/26] handle="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.184 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
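
Annotation: the two assignments so far (192.168.36.65 for coredns-7db6d8ff4d-8vqm4, 192.168.36.66 for csi-node-driver-rzphw) follow the block-affinity walk the ipam records trace: confirm this host's affinity for 192.168.36.64/26, load the block, claim the next free address, and write the block back to record the claim. A toy in-memory model of that walk — the block type and assign method are illustrative, a single block stands in for the datastore, and marking .64 as already taken on the host is an assumption made so the output matches the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]string // addr -> handleID
    }

    // assign hands out the lowest free address in the block; mutating the map
    // is this toy's version of "Writing block in order to claim IPs".
    func (b *block) assign(handle string) (netip.Addr, bool) {
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.allocated[a]; !taken {
                b.allocated[a] = handle
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            cidr:      netip.MustParsePrefix("192.168.36.64/26"),
            allocated: map[netip.Addr]string{},
        }
        // Assumption: .64 is already in use on this host, so pods start at .65.
        b.allocated[netip.MustParseAddr("192.168.36.64")] = "host"
        for _, h := range []string{
            "k8s-pod-network.4f6be0e32855", // coredns pod (handle truncated for readability)
            "k8s-pod-network.c49881a562a1", // csi-node-driver pod
        } {
            if ip, ok := b.assign(h); ok {
                fmt.Println(h, "->", ip) // .65 then .66, matching the log
            }
        }
    }
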
Dec 13 09:04:33.215612 containerd[1469]: 2024-12-13 09:04:33.184 [INFO][4350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.66/26] IPv6=[] ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" HandleID="k8s-pod-network.c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.216687 containerd[1469]: 2024-12-13 09:04:33.187 [INFO][4339] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5aed41f-ee8e-4f6b-9d24-6472c4316100", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"", Pod:"csi-node-driver-rzphw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b221ecdd8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:33.216687 containerd[1469]: 2024-12-13 09:04:33.187 [INFO][4339] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.66/32] ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.216687 containerd[1469]: 2024-12-13 09:04:33.187 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b221ecdd8e ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.216687 containerd[1469]: 2024-12-13 09:04:33.189 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.216687 containerd[1469]: 2024-12-13 09:04:33.190 [INFO][4339] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5aed41f-ee8e-4f6b-9d24-6472c4316100", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa", Pod:"csi-node-driver-rzphw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b221ecdd8e", MAC:"8e:c5:64:8c:31:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:33.216687 containerd[1469]: 2024-12-13 09:04:33.212 [INFO][4339] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa" Namespace="calico-system" Pod="csi-node-driver-rzphw" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:33.220350 kubelet[2739]: I1213 09:04:33.219897 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8vqm4" podStartSLOduration=33.219855116 podStartE2EDuration="33.219855116s" podCreationTimestamp="2024-12-13 09:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:04:33.219729235 +0000 UTC m=+47.377411764" watchObservedRunningTime="2024-12-13 09:04:33.219855116 +0000 UTC m=+47.377537645" Dec 13 09:04:33.272921 containerd[1469]: time="2024-12-13T09:04:33.272036521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:33.272921 containerd[1469]: time="2024-12-13T09:04:33.272105361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:33.272921 containerd[1469]: time="2024-12-13T09:04:33.272120401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:33.272921 containerd[1469]: time="2024-12-13T09:04:33.272224003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:33.301083 systemd[1]: Started cri-containerd-c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa.scope - libcontainer container c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa. Dec 13 09:04:33.338712 containerd[1469]: time="2024-12-13T09:04:33.338581360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzphw,Uid:a5aed41f-ee8e-4f6b-9d24-6472c4316100,Namespace:calico-system,Attempt:1,} returns sandbox id \"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa\"" Dec 13 09:04:33.340860 containerd[1469]: time="2024-12-13T09:04:33.340814105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 09:04:33.953172 containerd[1469]: time="2024-12-13T09:04:33.952947407Z" level=info msg="StopPodSandbox for \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\"" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.007 [INFO][4446] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.008 [INFO][4446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" iface="eth0" netns="/var/run/netns/cni-bb16060f-e0e2-132d-ddbc-78e07359f194" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.009 [INFO][4446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" iface="eth0" netns="/var/run/netns/cni-bb16060f-e0e2-132d-ddbc-78e07359f194" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.009 [INFO][4446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" iface="eth0" netns="/var/run/netns/cni-bb16060f-e0e2-132d-ddbc-78e07359f194" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.009 [INFO][4446] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.009 [INFO][4446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.032 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.032 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.032 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.042 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.042 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.045 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:34.048993 containerd[1469]: 2024-12-13 09:04:34.047 [INFO][4446] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:34.054949 containerd[1469]: time="2024-12-13T09:04:34.049238881Z" level=info msg="TearDown network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\" successfully" Dec 13 09:04:34.054949 containerd[1469]: time="2024-12-13T09:04:34.049296362Z" level=info msg="StopPodSandbox for \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\" returns successfully" Dec 13 09:04:34.054949 containerd[1469]: time="2024-12-13T09:04:34.051426264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-bmlsz,Uid:f31f3787-a93b-48e8-9d06-71efae4d1e4f,Namespace:calico-apiserver,Attempt:1,}" Dec 13 09:04:34.053952 systemd[1]: run-netns-cni\x2dbb16060f\x2de0e2\x2d132d\x2dddbc\x2d78e07359f194.mount: Deactivated successfully. 
Dec 13 09:04:34.208499 systemd-networkd[1379]: cali2eff1a0bee0: Link UP Dec 13 09:04:34.212883 systemd-networkd[1379]: cali2eff1a0bee0: Gained carrier Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.089 [INFO][4458] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.108 [INFO][4458] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0 calico-apiserver-5dc6fbbbbc- calico-apiserver f31f3787-a93b-48e8-9d06-71efae4d1e4f 828 0 2024-12-13 09:04:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc6fbbbbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-6-29baf1648e calico-apiserver-5dc6fbbbbc-bmlsz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2eff1a0bee0 [] []}} ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.108 [INFO][4458] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.142 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" HandleID="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.157 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" HandleID="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003849f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-6-29baf1648e", "pod":"calico-apiserver-5dc6fbbbbc-bmlsz", "timestamp":"2024-12-13 09:04:34.142293673 +0000 UTC"}, Hostname:"ci-4081-2-1-6-29baf1648e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.157 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.157 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.157 [INFO][4469] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-29baf1648e' Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.160 [INFO][4469] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.166 [INFO][4469] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.173 [INFO][4469] ipam/ipam.go 489: Trying affinity for 192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.176 [INFO][4469] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.179 [INFO][4469] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.179 [INFO][4469] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.181 [INFO][4469] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.187 [INFO][4469] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.198 [INFO][4469] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.67/26] block=192.168.36.64/26 handle="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.198 [INFO][4469] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.67/26] handle="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.198 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
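The ADD path bracketed by the lock messages above follows Calico's block-affinity scheme: look up this host's affinities, try the affine block 192.168.36.64/26, load it, claim one free address (192.168.36.67 here), create a handle, and write the block back. A self-contained Go sketch of the claim step, assuming a simple offset-map view of the block rather than Calico's real block data model:

```go
package main

import (
	"fmt"
	"net"
)

// assignFromBlock claims the first free address in the block,
// mirroring "Attempting to assign 1 addresses from block" /
// "Successfully claimed IPs" in the log. allocated holds offsets
// already in use.
func assignFromBlock(block *net.IPNet, allocated map[int]bool) (net.IP, error) {
	ones, bits := block.Mask.Size() // 26, 32 for 192.168.36.64/26
	size := 1 << (bits - ones)      // 64 addresses
	base := block.IP.To4()
	for off := 1; off < size; off++ { // skip the network address
		if allocated[off] {
			continue
		}
		allocated[off] = true
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(off)), nil
	}
	return nil, fmt.Errorf("block %s is full", block)
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.36.64/26")
	// Offsets taken by this point in the log's timeline
	// (.66 is csi-node-driver above; .65 assumed claimed earlier).
	allocated := map[int]bool{1: true, 2: true}
	ip, _ := assignFromBlock(block, allocated)
	fmt.Println("claimed:", ip) // claimed: 192.168.36.67
}
```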
Dec 13 09:04:34.239648 containerd[1469]: 2024-12-13 09:04:34.198 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.67/26] IPv6=[] ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" HandleID="k8s-pod-network.930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.241494 containerd[1469]: 2024-12-13 09:04:34.202 [INFO][4458] cni-plugin/k8s.go 386: Populated endpoint ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f31f3787-a93b-48e8-9d06-71efae4d1e4f", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"", Pod:"calico-apiserver-5dc6fbbbbc-bmlsz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2eff1a0bee0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:34.241494 containerd[1469]: 2024-12-13 09:04:34.202 [INFO][4458] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.67/32] ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.241494 containerd[1469]: 2024-12-13 09:04:34.203 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2eff1a0bee0 ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.241494 containerd[1469]: 2024-12-13 09:04:34.215 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.241494 containerd[1469]: 2024-12-13 09:04:34.216 [INFO][4458] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f31f3787-a93b-48e8-9d06-71efae4d1e4f", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d", Pod:"calico-apiserver-5dc6fbbbbc-bmlsz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2eff1a0bee0", MAC:"b2:d0:90:96:25:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:34.241494 containerd[1469]: 2024-12-13 09:04:34.232 [INFO][4458] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-bmlsz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:34.272704 containerd[1469]: time="2024-12-13T09:04:34.272513940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:34.272704 containerd[1469]: time="2024-12-13T09:04:34.272596301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:34.272704 containerd[1469]: time="2024-12-13T09:04:34.272612381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:34.273251 containerd[1469]: time="2024-12-13T09:04:34.273069786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:34.283443 systemd-networkd[1379]: calicf8f2dc739b: Gained IPv6LL Dec 13 09:04:34.300430 systemd[1]: Started cri-containerd-930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d.scope - libcontainer container 930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d. 
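With the IP claimed, the plugin fills in the v3 WorkloadEndpoint printed above — host-side veth cali2eff1a0bee0, MAC b2:d0:90:96:25:51, the pod's /32 and its profiles — and writes it to the datastore. A stripped-down Go mirror of just the Spec fields the log shows, using local types rather than the real libcalico-go ones:

```go
package main

import "fmt"

// workloadEndpoint keeps only the Spec fields visible in the log;
// the real v3.WorkloadEndpoint lives in libcalico-go.
type workloadEndpoint struct {
	Node, ContainerID, Pod  string
	Endpoint, InterfaceName string
	MAC                     string
	IPNetworks, Profiles    []string
}

func main() {
	ep := workloadEndpoint{
		Node:          "ci-4081-2-1-6-29baf1648e",
		ContainerID:   "930aa36e05bb...", // truncated for readability
		Pod:           "calico-apiserver-5dc6fbbbbc-bmlsz",
		Endpoint:      "eth0",
		InterfaceName: "cali2eff1a0bee0",
		MAC:           "b2:d0:90:96:25:51",
		IPNetworks:    []string{"192.168.36.67/32"},
		Profiles:      []string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"},
	}
	// "Wrote updated endpoint to datastore" — here we just print it.
	fmt.Printf("%+v\n", ep)
}
```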
Dec 13 09:04:34.346209 containerd[1469]: time="2024-12-13T09:04:34.345740680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-bmlsz,Uid:f31f3787-a93b-48e8-9d06-71efae4d1e4f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d\"" Dec 13 09:04:34.411985 systemd-networkd[1379]: cali0b221ecdd8e: Gained IPv6LL Dec 13 09:04:34.828535 containerd[1469]: time="2024-12-13T09:04:34.828472103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:34.830336 containerd[1469]: time="2024-12-13T09:04:34.829993359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 09:04:34.830758 containerd[1469]: time="2024-12-13T09:04:34.830711887Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:34.835674 containerd[1469]: time="2024-12-13T09:04:34.835634379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:34.836422 containerd[1469]: time="2024-12-13T09:04:34.835998503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.495131238s" Dec 13 09:04:34.836822 containerd[1469]: time="2024-12-13T09:04:34.836801672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 09:04:34.839339 containerd[1469]: time="2024-12-13T09:04:34.839309778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 09:04:34.841050 containerd[1469]: time="2024-12-13T09:04:34.841017477Z" level=info msg="CreateContainer within sandbox \"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 09:04:34.874854 containerd[1469]: time="2024-12-13T09:04:34.874804917Z" level=info msg="CreateContainer within sandbox \"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"02e961c3aa01b7b920640c3e1282d0406a4cbce19ea64042f1311bb58546ea84\"" Dec 13 09:04:34.875978 containerd[1469]: time="2024-12-13T09:04:34.875930649Z" level=info msg="StartContainer for \"02e961c3aa01b7b920640c3e1282d0406a4cbce19ea64042f1311bb58546ea84\"" Dec 13 09:04:34.910462 systemd[1]: Started cri-containerd-02e961c3aa01b7b920640c3e1282d0406a4cbce19ea64042f1311bb58546ea84.scope - libcontainer container 02e961c3aa01b7b920640c3e1282d0406a4cbce19ea64042f1311bb58546ea84. 
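The Pulled entry above reports the CSI image fetch took 1.495131238s, which squares with the PullImage entry issued at 09:04:33.340814105Z right after the csi-node-driver sandbox came up. A short Go check of the arithmetic (the roughly 50µs gap against containerd's figure is just log-emission latency):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage and Pulled entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2024-12-13T09:04:33.340814105Z")
	end, _ := time.Parse(time.RFC3339Nano, "2024-12-13T09:04:34.835998503Z")
	fmt.Println(end.Sub(start)) // 1.495184398s vs the reported 1.495131238s
}
```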
Dec 13 09:04:34.944354 containerd[1469]: time="2024-12-13T09:04:34.944299257Z" level=info msg="StartContainer for \"02e961c3aa01b7b920640c3e1282d0406a4cbce19ea64042f1311bb58546ea84\" returns successfully" Dec 13 09:04:34.950971 containerd[1469]: time="2024-12-13T09:04:34.950906127Z" level=info msg="StopPodSandbox for \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\"" Dec 13 09:04:34.951914 containerd[1469]: time="2024-12-13T09:04:34.951348852Z" level=info msg="StopPodSandbox for \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\"" Dec 13 09:04:34.952219 containerd[1469]: time="2024-12-13T09:04:34.952169501Z" level=info msg="StopPodSandbox for \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\"" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.076 [INFO][4630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.078 [INFO][4630] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" iface="eth0" netns="/var/run/netns/cni-e2eac1ea-f2dd-f0aa-834e-03358eab2a30" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.078 [INFO][4630] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" iface="eth0" netns="/var/run/netns/cni-e2eac1ea-f2dd-f0aa-834e-03358eab2a30" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.079 [INFO][4630] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" iface="eth0" netns="/var/run/netns/cni-e2eac1ea-f2dd-f0aa-834e-03358eab2a30" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.079 [INFO][4630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.079 [INFO][4630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.149 [INFO][4651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.149 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.149 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.175 [WARNING][4651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.175 [INFO][4651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.185 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:35.190020 containerd[1469]: 2024-12-13 09:04:35.188 [INFO][4630] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:35.192352 containerd[1469]: time="2024-12-13T09:04:35.192294668Z" level=info msg="TearDown network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\" successfully" Dec 13 09:04:35.194106 containerd[1469]: time="2024-12-13T09:04:35.192453470Z" level=info msg="StopPodSandbox for \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\" returns successfully" Dec 13 09:04:35.194153 systemd[1]: run-netns-cni\x2de2eac1ea\x2df2dd\x2df0aa\x2d834e\x2d03358eab2a30.mount: Deactivated successfully. Dec 13 09:04:35.203592 containerd[1469]: time="2024-12-13T09:04:35.203365545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-92kcx,Uid:65b6b39e-1d72-462b-8014-1227230aa5b7,Namespace:kube-system,Attempt:1,}" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.064 [INFO][4629] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.065 [INFO][4629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" iface="eth0" netns="/var/run/netns/cni-3597611b-a466-af0a-d1f8-bb3e00ce7dc6" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.065 [INFO][4629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" iface="eth0" netns="/var/run/netns/cni-3597611b-a466-af0a-d1f8-bb3e00ce7dc6" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.065 [INFO][4629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" iface="eth0" netns="/var/run/netns/cni-3597611b-a466-af0a-d1f8-bb3e00ce7dc6" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.065 [INFO][4629] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.065 [INFO][4629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.146 [INFO][4647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.154 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.185 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.204 [WARNING][4647] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.204 [INFO][4647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.207 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:35.216752 containerd[1469]: 2024-12-13 09:04:35.212 [INFO][4629] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:35.219701 systemd[1]: run-netns-cni\x2d3597611b\x2da466\x2daf0a\x2dd1f8\x2dbb3e00ce7dc6.mount: Deactivated successfully. Dec 13 09:04:35.221881 containerd[1469]: time="2024-12-13T09:04:35.221829978Z" level=info msg="TearDown network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\" successfully" Dec 13 09:04:35.221881 containerd[1469]: time="2024-12-13T09:04:35.221866379Z" level=info msg="StopPodSandbox for \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\" returns successfully" Dec 13 09:04:35.229617 containerd[1469]: time="2024-12-13T09:04:35.229463498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-n6v2x,Uid:8eb77da9-ce92-437d-a506-03a84d1e2646,Namespace:calico-apiserver,Attempt:1,}" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.085 [INFO][4625] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.088 [INFO][4625] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" iface="eth0" netns="/var/run/netns/cni-a4da9146-e53d-5d26-9e56-34d7085585a9" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.089 [INFO][4625] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" iface="eth0" netns="/var/run/netns/cni-a4da9146-e53d-5d26-9e56-34d7085585a9" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.089 [INFO][4625] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" iface="eth0" netns="/var/run/netns/cni-a4da9146-e53d-5d26-9e56-34d7085585a9" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.089 [INFO][4625] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.090 [INFO][4625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.176 [INFO][4655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.176 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.209 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.233 [WARNING][4655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.234 [INFO][4655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.241 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:35.246673 containerd[1469]: 2024-12-13 09:04:35.243 [INFO][4625] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:35.248629 containerd[1469]: time="2024-12-13T09:04:35.248445098Z" level=info msg="TearDown network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\" successfully" Dec 13 09:04:35.249415 containerd[1469]: time="2024-12-13T09:04:35.249346587Z" level=info msg="StopPodSandbox for \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\" returns successfully" Dec 13 09:04:35.249852 systemd[1]: run-netns-cni\x2da4da9146\x2de53d\x2d5d26\x2d9e56\x2d34d7085585a9.mount: Deactivated successfully. Dec 13 09:04:35.251836 containerd[1469]: time="2024-12-13T09:04:35.250679161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c4855bd8-km44p,Uid:98dfd1c7-089d-4dd5-bd73-26a0d273295c,Namespace:calico-system,Attempt:1,}" Dec 13 09:04:35.450844 systemd-networkd[1379]: cali9850acc98a8: Link UP Dec 13 09:04:35.453054 systemd-networkd[1379]: cali9850acc98a8: Gained carrier Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.271 [INFO][4668] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.292 [INFO][4668] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0 coredns-7db6d8ff4d- kube-system 65b6b39e-1d72-462b-8014-1227230aa5b7 849 0 2024-12-13 09:04:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-6-29baf1648e coredns-7db6d8ff4d-92kcx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9850acc98a8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.292 [INFO][4668] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.359 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" HandleID="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.387 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" HandleID="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-6-29baf1648e", "pod":"coredns-7db6d8ff4d-92kcx", "timestamp":"2024-12-13 09:04:35.359555464 +0000 UTC"}, Hostname:"ci-4081-2-1-6-29baf1648e", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.387 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.387 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.387 [INFO][4703] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-29baf1648e' Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.390 [INFO][4703] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.405 [INFO][4703] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.413 [INFO][4703] ipam/ipam.go 489: Trying affinity for 192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.416 [INFO][4703] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.422 [INFO][4703] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.422 [INFO][4703] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.425 [INFO][4703] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6 Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.432 [INFO][4703] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.442 [INFO][4703] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.68/26] block=192.168.36.64/26 handle="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.443 [INFO][4703] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.68/26] handle="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.443 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:04:35.480385 containerd[1469]: 2024-12-13 09:04:35.443 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.68/26] IPv6=[] ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" HandleID="k8s-pod-network.6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.481084 containerd[1469]: 2024-12-13 09:04:35.445 [INFO][4668] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65b6b39e-1d72-462b-8014-1227230aa5b7", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"", Pod:"coredns-7db6d8ff4d-92kcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9850acc98a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:35.481084 containerd[1469]: 2024-12-13 09:04:35.445 [INFO][4668] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.68/32] ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.481084 containerd[1469]: 2024-12-13 09:04:35.445 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9850acc98a8 ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.481084 containerd[1469]: 2024-12-13 09:04:35.449 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" 
WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.481084 containerd[1469]: 2024-12-13 09:04:35.449 [INFO][4668] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65b6b39e-1d72-462b-8014-1227230aa5b7", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6", Pod:"coredns-7db6d8ff4d-92kcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9850acc98a8", MAC:"a2:38:64:4f:45:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:35.481084 containerd[1469]: 2024-12-13 09:04:35.475 [INFO][4668] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-92kcx" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:35.522854 systemd-networkd[1379]: cali93ba1381c45: Link UP Dec 13 09:04:35.527435 systemd-networkd[1379]: cali93ba1381c45: Gained carrier Dec 13 09:04:35.537642 containerd[1469]: time="2024-12-13T09:04:35.537303729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:35.537642 containerd[1469]: time="2024-12-13T09:04:35.537369850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:35.537642 containerd[1469]: time="2024-12-13T09:04:35.537389090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:35.538440 containerd[1469]: time="2024-12-13T09:04:35.538340060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:35.563794 systemd[1]: Started cri-containerd-6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6.scope - libcontainer container 6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6. Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.304 [INFO][4679] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.327 [INFO][4679] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0 calico-apiserver-5dc6fbbbbc- calico-apiserver 8eb77da9-ce92-437d-a506-03a84d1e2646 848 0 2024-12-13 09:04:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc6fbbbbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-6-29baf1648e calico-apiserver-5dc6fbbbbc-n6v2x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali93ba1381c45 [] []}} ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.327 [INFO][4679] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.372 [INFO][4710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" HandleID="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.403 [INFO][4710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" HandleID="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003175f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-6-29baf1648e", "pod":"calico-apiserver-5dc6fbbbbc-n6v2x", "timestamp":"2024-12-13 09:04:35.372603161 +0000 UTC"}, Hostname:"ci-4081-2-1-6-29baf1648e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.403 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
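Unlike the apiserver and CSI endpoints, the coredns endpoint above carries named WorkloadEndpointPort entries, printed with Go hex values: Port:0x35 is 53 for dns and dns-tcp, and Port:0x23c1 is 9153, the coredns metrics port. A few lines confirming the decoding:

```go
package main

import "fmt"

func main() {
	// Port values exactly as printed in the WorkloadEndpointPort literals above.
	ports := []struct {
		name string
		port uint16
	}{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}}
	for _, p := range ports {
		fmt.Printf("%-8s %d\n", p.name, p.port) // dns 53, dns-tcp 53, metrics 9153
	}
}
```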
Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.443 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.443 [INFO][4710] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-29baf1648e' Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.447 [INFO][4710] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.457 [INFO][4710] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.479 [INFO][4710] ipam/ipam.go 489: Trying affinity for 192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.483 [INFO][4710] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.490 [INFO][4710] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.490 [INFO][4710] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.493 [INFO][4710] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0 Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.499 [INFO][4710] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.511 [INFO][4710] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.69/26] block=192.168.36.64/26 handle="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.512 [INFO][4710] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.69/26] handle="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.512 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:04:35.576312 containerd[1469]: 2024-12-13 09:04:35.512 [INFO][4710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.69/26] IPv6=[] ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" HandleID="k8s-pod-network.4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.576866 containerd[1469]: 2024-12-13 09:04:35.514 [INFO][4679] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8eb77da9-ce92-437d-a506-03a84d1e2646", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"", Pod:"calico-apiserver-5dc6fbbbbc-n6v2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93ba1381c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:35.576866 containerd[1469]: 2024-12-13 09:04:35.515 [INFO][4679] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.69/32] ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.576866 containerd[1469]: 2024-12-13 09:04:35.515 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93ba1381c45 ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.576866 containerd[1469]: 2024-12-13 09:04:35.526 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.576866 containerd[1469]: 2024-12-13 09:04:35.528 [INFO][4679] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8eb77da9-ce92-437d-a506-03a84d1e2646", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0", Pod:"calico-apiserver-5dc6fbbbbc-n6v2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93ba1381c45", MAC:"26:97:df:2a:11:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:35.576866 containerd[1469]: 2024-12-13 09:04:35.562 [INFO][4679] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0" Namespace="calico-apiserver" Pod="calico-apiserver-5dc6fbbbbc-n6v2x" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:35.637383 containerd[1469]: time="2024-12-13T09:04:35.635408198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:35.637383 containerd[1469]: time="2024-12-13T09:04:35.636527130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:35.637383 containerd[1469]: time="2024-12-13T09:04:35.636549690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:35.637383 containerd[1469]: time="2024-12-13T09:04:35.636634131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:35.639381 systemd-networkd[1379]: cali3a0fab50088: Link UP Dec 13 09:04:35.640252 systemd-networkd[1379]: cali3a0fab50088: Gained carrier Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.322 [INFO][4689] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.352 [INFO][4689] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0 calico-kube-controllers-85c4855bd8- calico-system 98dfd1c7-089d-4dd5-bd73-26a0d273295c 850 0 2024-12-13 09:04:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85c4855bd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-6-29baf1648e calico-kube-controllers-85c4855bd8-km44p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3a0fab50088 [] []}} ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.352 [INFO][4689] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.414 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.436 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003167f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-6-29baf1648e", "pod":"calico-kube-controllers-85c4855bd8-km44p", "timestamp":"2024-12-13 09:04:35.414840884 +0000 UTC"}, Hostname:"ci-4081-2-1-6-29baf1648e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.436 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.512 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.513 [INFO][4718] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-29baf1648e' Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.519 [INFO][4718] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.540 [INFO][4718] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.568 [INFO][4718] ipam/ipam.go 489: Trying affinity for 192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.581 [INFO][4718] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.597 [INFO][4718] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.598 [INFO][4718] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.607 [INFO][4718] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065 Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.616 [INFO][4718] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.629 [INFO][4718] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.70/26] block=192.168.36.64/26 handle="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.629 [INFO][4718] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.70/26] handle="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.629 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
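
Annotation: the sequence just traced is Calico's block-affinity allocation: the host confirms its affinity for the /26 block 192.168.36.64/26, loads the block, and claims the first free ordinal, yielding 192.168.36.70 (the apiserver pod above took .69). A self-contained toy model of that claim step follows; the bitmap-as-map representation is an assumption, not Calico's data structure.

```go
// Toy model of claiming the next free ordinal in an IPAM block.
// Assumption: ordinals 0-5 (.64-.69) are already taken, as the earlier
// assignments in this log suggest, so the next claim yields 192.168.36.70.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr netip.Prefix // e.g. 192.168.36.64/26
	used map[int]bool // ordinal -> already allocated
}

// claim returns the first unallocated address in the block, or false if full.
func (b *block) claim() (netip.Addr, bool) {
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	addr := b.cidr.Addr()
	for ord := 0; ord < size; ord++ {
		if !b.used[ord] {
			b.used[ord] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.36.64/26"),
		used: map[int]bool{0: true, 1: true, 2: true, 3: true, 4: true, 5: true},
	}
	ip, ok := b.claim()
	fmt.Println(ip, ok) // 192.168.36.70 true, matching "Successfully claimed IPs"
}
```
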
Dec 13 09:04:35.671031 containerd[1469]: 2024-12-13 09:04:35.629 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.70/26] IPv6=[] ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.671919 containerd[1469]: 2024-12-13 09:04:35.632 [INFO][4689] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0", GenerateName:"calico-kube-controllers-85c4855bd8-", Namespace:"calico-system", SelfLink:"", UID:"98dfd1c7-089d-4dd5-bd73-26a0d273295c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c4855bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"", Pod:"calico-kube-controllers-85c4855bd8-km44p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a0fab50088", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:35.671919 containerd[1469]: 2024-12-13 09:04:35.632 [INFO][4689] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.70/32] ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.671919 containerd[1469]: 2024-12-13 09:04:35.632 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a0fab50088 ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.671919 containerd[1469]: 2024-12-13 09:04:35.641 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 
09:04:35.671919 containerd[1469]: 2024-12-13 09:04:35.641 [INFO][4689] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0", GenerateName:"calico-kube-controllers-85c4855bd8-", Namespace:"calico-system", SelfLink:"", UID:"98dfd1c7-089d-4dd5-bd73-26a0d273295c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c4855bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065", Pod:"calico-kube-controllers-85c4855bd8-km44p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a0fab50088", MAC:"32:45:0b:0a:ad:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:35.671919 containerd[1469]: 2024-12-13 09:04:35.662 [INFO][4689] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Namespace="calico-system" Pod="calico-kube-controllers-85c4855bd8-km44p" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:35.684208 containerd[1469]: time="2024-12-13T09:04:35.682337411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-92kcx,Uid:65b6b39e-1d72-462b-8014-1227230aa5b7,Namespace:kube-system,Attempt:1,} returns sandbox id \"6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6\"" Dec 13 09:04:35.687447 systemd[1]: Started cri-containerd-4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0.scope - libcontainer container 4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0. Dec 13 09:04:35.696200 containerd[1469]: time="2024-12-13T09:04:35.696105955Z" level=info msg="CreateContainer within sandbox \"6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:04:35.730095 containerd[1469]: time="2024-12-13T09:04:35.729366264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:35.730095 containerd[1469]: time="2024-12-13T09:04:35.729473385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:35.730095 containerd[1469]: time="2024-12-13T09:04:35.729492746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:35.737459 containerd[1469]: time="2024-12-13T09:04:35.733616269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:35.768169 systemd[1]: Started cri-containerd-eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065.scope - libcontainer container eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065. Dec 13 09:04:35.790278 containerd[1469]: time="2024-12-13T09:04:35.788622286Z" level=info msg="CreateContainer within sandbox \"6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac843b685c1c94e4ffc20040a842f739ef32377126a7552c53f75949c0e4ba9d\"" Dec 13 09:04:35.791372 containerd[1469]: time="2024-12-13T09:04:35.790701308Z" level=info msg="StartContainer for \"ac843b685c1c94e4ffc20040a842f739ef32377126a7552c53f75949c0e4ba9d\"" Dec 13 09:04:35.809750 containerd[1469]: time="2024-12-13T09:04:35.809705547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc6fbbbbc-n6v2x,Uid:8eb77da9-ce92-437d-a506-03a84d1e2646,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0\"" Dec 13 09:04:35.819442 systemd-networkd[1379]: cali2eff1a0bee0: Gained IPv6LL Dec 13 09:04:35.845481 systemd[1]: Started cri-containerd-ac843b685c1c94e4ffc20040a842f739ef32377126a7552c53f75949c0e4ba9d.scope - libcontainer container ac843b685c1c94e4ffc20040a842f739ef32377126a7552c53f75949c0e4ba9d. 
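
Annotation: the containerd entries here share one shape: a syslog-style prefix ("Dec 13 09:04:35.892650 containerd[1469]:") followed by logfmt-like fields (time=, level=, msg=). A small parser sketch for that shape is below; the regex is fitted to the lines shown in this log (such as the StartContainer entry that follows) and is an assumption, not a general journald parser.

```go
// Parse the containerd journal lines seen in this log into (time, level, msg).
// Assumption: msg is the last field on the line and is double-quoted, which
// holds for every containerd entry in this transcript.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var containerdLine = regexp.MustCompile(
	`containerd\[\d+\]: time="([^"]+)" level=(\w+) msg="(.*)"$`)

func parse(line string) (time.Time, string, string, bool) {
	m := containerdLine.FindStringSubmatch(line)
	if m == nil {
		return time.Time{}, "", "", false
	}
	ts, err := time.Parse(time.RFC3339Nano, m[1])
	if err != nil {
		return time.Time{}, "", "", false
	}
	return ts, m[2], m[3], true
}

func main() {
	line := `Dec 13 09:04:35.892650 containerd[1469]: time="2024-12-13T09:04:35.892129612Z" level=info msg="StartContainer for \"ac843b685c1c94e4ffc20040a842f739ef32377126a7552c53f75949c0e4ba9d\" returns successfully"`
	if ts, level, msg, ok := parse(line); ok {
		fmt.Println(ts.Format(time.RFC3339Nano), level, msg)
	}
}
```

Parsed this way, the time= field (containerd's own clock) can be compared across entries, e.g. to measure the gap between a RunPodSandbox return and the matching StartContainer return.
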
Dec 13 09:04:35.892650 containerd[1469]: time="2024-12-13T09:04:35.892129612Z" level=info msg="StartContainer for \"ac843b685c1c94e4ffc20040a842f739ef32377126a7552c53f75949c0e4ba9d\" returns successfully" Dec 13 09:04:35.911078 containerd[1469]: time="2024-12-13T09:04:35.910724927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c4855bd8-km44p,Uid:98dfd1c7-089d-4dd5-bd73-26a0d273295c,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\"" Dec 13 09:04:36.297938 kubelet[2739]: I1213 09:04:36.296350 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-92kcx" podStartSLOduration=36.296329608 podStartE2EDuration="36.296329608s" podCreationTimestamp="2024-12-13 09:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:04:36.275879077 +0000 UTC m=+50.433561646" watchObservedRunningTime="2024-12-13 09:04:36.296329608 +0000 UTC m=+50.454012137" Dec 13 09:04:36.446255 kubelet[2739]: I1213 09:04:36.445121 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:04:36.652304 systemd-networkd[1379]: cali93ba1381c45: Gained IPv6LL Dec 13 09:04:36.779523 systemd-networkd[1379]: cali9850acc98a8: Gained IPv6LL Dec 13 09:04:37.419525 systemd-networkd[1379]: cali3a0fab50088: Gained IPv6LL Dec 13 09:04:37.569234 kernel: bpftool[5010]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 09:04:37.764814 systemd-networkd[1379]: vxlan.calico: Link UP Dec 13 09:04:37.764827 systemd-networkd[1379]: vxlan.calico: Gained carrier Dec 13 09:04:39.483444 containerd[1469]: time="2024-12-13T09:04:39.482595414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:39.484710 containerd[1469]: time="2024-12-13T09:04:39.484679595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 09:04:39.485901 containerd[1469]: time="2024-12-13T09:04:39.485851246Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:39.489176 containerd[1469]: time="2024-12-13T09:04:39.489112918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:39.490130 containerd[1469]: time="2024-12-13T09:04:39.490098528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.649807619s" Dec 13 09:04:39.490638 containerd[1469]: time="2024-12-13T09:04:39.490535172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 09:04:39.491934 containerd[1469]: time="2024-12-13T09:04:39.491565943Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 09:04:39.494426 containerd[1469]: time="2024-12-13T09:04:39.494398491Z" level=info msg="CreateContainer within sandbox \"930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 09:04:39.512608 containerd[1469]: time="2024-12-13T09:04:39.512483390Z" level=info msg="CreateContainer within sandbox \"930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a381c044186d2423e115ce3ea620e263bdd98a4cc2a8f8d0eda9234fec3afcb1\"" Dec 13 09:04:39.514893 containerd[1469]: time="2024-12-13T09:04:39.514849213Z" level=info msg="StartContainer for \"a381c044186d2423e115ce3ea620e263bdd98a4cc2a8f8d0eda9234fec3afcb1\"" Dec 13 09:04:39.532797 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Dec 13 09:04:39.557419 systemd[1]: Started cri-containerd-a381c044186d2423e115ce3ea620e263bdd98a4cc2a8f8d0eda9234fec3afcb1.scope - libcontainer container a381c044186d2423e115ce3ea620e263bdd98a4cc2a8f8d0eda9234fec3afcb1. Dec 13 09:04:39.597935 containerd[1469]: time="2024-12-13T09:04:39.597881595Z" level=info msg="StartContainer for \"a381c044186d2423e115ce3ea620e263bdd98a4cc2a8f8d0eda9234fec3afcb1\" returns successfully" Dec 13 09:04:40.285461 kubelet[2739]: I1213 09:04:40.285008 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-bmlsz" podStartSLOduration=24.143826467 podStartE2EDuration="29.28498652s" podCreationTimestamp="2024-12-13 09:04:11 +0000 UTC" firstStartedPulling="2024-12-13 09:04:34.350231768 +0000 UTC m=+48.507914257" lastFinishedPulling="2024-12-13 09:04:39.491391781 +0000 UTC m=+53.649074310" observedRunningTime="2024-12-13 09:04:40.283812709 +0000 UTC m=+54.441495238" watchObservedRunningTime="2024-12-13 09:04:40.28498652 +0000 UTC m=+54.442669089" Dec 13 09:04:41.072306 containerd[1469]: time="2024-12-13T09:04:41.071389311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:41.074129 containerd[1469]: time="2024-12-13T09:04:41.074059216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 09:04:41.076477 containerd[1469]: time="2024-12-13T09:04:41.076396639Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:41.084115 containerd[1469]: time="2024-12-13T09:04:41.083827950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.592208607s" Dec 13 09:04:41.084115 containerd[1469]: time="2024-12-13T09:04:41.083968912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 09:04:41.088174 containerd[1469]: time="2024-12-13T09:04:41.086688258Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 09:04:41.088174 containerd[1469]: time="2024-12-13T09:04:41.088008871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:41.092371 containerd[1469]: time="2024-12-13T09:04:41.092327152Z" level=info msg="CreateContainer within sandbox \"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 09:04:41.114151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884836715.mount: Deactivated successfully. Dec 13 09:04:41.124370 containerd[1469]: time="2024-12-13T09:04:41.122617404Z" level=info msg="CreateContainer within sandbox \"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"970ae0140db7000918c574351f688148d0cc126ef5249c070942551f29b7320e\"" Dec 13 09:04:41.126674 containerd[1469]: time="2024-12-13T09:04:41.125009867Z" level=info msg="StartContainer for \"970ae0140db7000918c574351f688148d0cc126ef5249c070942551f29b7320e\"" Dec 13 09:04:41.171507 systemd[1]: run-containerd-runc-k8s.io-970ae0140db7000918c574351f688148d0cc126ef5249c070942551f29b7320e-runc.GZrLQE.mount: Deactivated successfully. Dec 13 09:04:41.180488 systemd[1]: Started cri-containerd-970ae0140db7000918c574351f688148d0cc126ef5249c070942551f29b7320e.scope - libcontainer container 970ae0140db7000918c574351f688148d0cc126ef5249c070942551f29b7320e. Dec 13 09:04:41.262796 containerd[1469]: time="2024-12-13T09:04:41.262681113Z" level=info msg="StartContainer for \"970ae0140db7000918c574351f688148d0cc126ef5249c070942551f29b7320e\" returns successfully" Dec 13 09:04:41.273890 kubelet[2739]: I1213 09:04:41.273844 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:04:41.292713 kubelet[2739]: I1213 09:04:41.292636 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rzphw" podStartSLOduration=22.548309104 podStartE2EDuration="30.292617282s" podCreationTimestamp="2024-12-13 09:04:11 +0000 UTC" firstStartedPulling="2024-12-13 09:04:33.340451941 +0000 UTC m=+47.498134470" lastFinishedPulling="2024-12-13 09:04:41.084760119 +0000 UTC m=+55.242442648" observedRunningTime="2024-12-13 09:04:41.290949666 +0000 UTC m=+55.448632235" watchObservedRunningTime="2024-12-13 09:04:41.292617282 +0000 UTC m=+55.450299811" Dec 13 09:04:41.471317 containerd[1469]: time="2024-12-13T09:04:41.469431585Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:41.471317 containerd[1469]: time="2024-12-13T09:04:41.470249913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 09:04:41.474177 containerd[1469]: time="2024-12-13T09:04:41.474137070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 387.404812ms" Dec 13 09:04:41.474177 containerd[1469]: 
time="2024-12-13T09:04:41.474184351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 09:04:41.476398 containerd[1469]: time="2024-12-13T09:04:41.476362812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 09:04:41.478864 containerd[1469]: time="2024-12-13T09:04:41.478815075Z" level=info msg="CreateContainer within sandbox \"4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 09:04:41.497349 containerd[1469]: time="2024-12-13T09:04:41.497303134Z" level=info msg="CreateContainer within sandbox \"4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5729dc713b65c86d9f0d9f3aa674b11248d0e8c7330dc1305ae9676bcd38b19e\"" Dec 13 09:04:41.499654 containerd[1469]: time="2024-12-13T09:04:41.499608836Z" level=info msg="StartContainer for \"5729dc713b65c86d9f0d9f3aa674b11248d0e8c7330dc1305ae9676bcd38b19e\"" Dec 13 09:04:41.531435 systemd[1]: Started cri-containerd-5729dc713b65c86d9f0d9f3aa674b11248d0e8c7330dc1305ae9676bcd38b19e.scope - libcontainer container 5729dc713b65c86d9f0d9f3aa674b11248d0e8c7330dc1305ae9676bcd38b19e. Dec 13 09:04:41.571883 containerd[1469]: time="2024-12-13T09:04:41.571532209Z" level=info msg="StartContainer for \"5729dc713b65c86d9f0d9f3aa674b11248d0e8c7330dc1305ae9676bcd38b19e\" returns successfully" Dec 13 09:04:42.068402 kubelet[2739]: I1213 09:04:42.068345 2739 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 09:04:42.075386 kubelet[2739]: I1213 09:04:42.075183 2739 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 09:04:43.142856 kubelet[2739]: I1213 09:04:43.142227 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dc6fbbbbc-n6v2x" podStartSLOduration=26.48220994 podStartE2EDuration="32.141962293s" podCreationTimestamp="2024-12-13 09:04:11 +0000 UTC" firstStartedPulling="2024-12-13 09:04:35.81565377 +0000 UTC m=+49.973336259" lastFinishedPulling="2024-12-13 09:04:41.475406083 +0000 UTC m=+55.633088612" observedRunningTime="2024-12-13 09:04:42.309338036 +0000 UTC m=+56.467020565" watchObservedRunningTime="2024-12-13 09:04:43.141962293 +0000 UTC m=+57.299644782" Dec 13 09:04:44.786972 containerd[1469]: time="2024-12-13T09:04:44.786913190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:44.788361 containerd[1469]: time="2024-12-13T09:04:44.788295922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 09:04:44.789226 containerd[1469]: time="2024-12-13T09:04:44.788977809Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:44.791944 containerd[1469]: time="2024-12-13T09:04:44.791611873Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:04:44.792546 containerd[1469]: time="2024-12-13T09:04:44.792501841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.316105229s" Dec 13 09:04:44.792649 containerd[1469]: time="2024-12-13T09:04:44.792544162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 09:04:44.809293 containerd[1469]: time="2024-12-13T09:04:44.808973234Z" level=info msg="CreateContainer within sandbox \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 09:04:44.832649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784042518.mount: Deactivated successfully. Dec 13 09:04:44.835611 containerd[1469]: time="2024-12-13T09:04:44.835456079Z" level=info msg="CreateContainer within sandbox \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\"" Dec 13 09:04:44.838672 containerd[1469]: time="2024-12-13T09:04:44.838623508Z" level=info msg="StartContainer for \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\"" Dec 13 09:04:44.874424 systemd[1]: Started cri-containerd-e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b.scope - libcontainer container e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b. Dec 13 09:04:44.935309 containerd[1469]: time="2024-12-13T09:04:44.935075202Z" level=info msg="StartContainer for \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\" returns successfully" Dec 13 09:04:45.333209 kubelet[2739]: I1213 09:04:45.332404 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85c4855bd8-km44p" podStartSLOduration=25.45161298 podStartE2EDuration="34.332385122s" podCreationTimestamp="2024-12-13 09:04:11 +0000 UTC" firstStartedPulling="2024-12-13 09:04:35.913198753 +0000 UTC m=+50.070881282" lastFinishedPulling="2024-12-13 09:04:44.793970895 +0000 UTC m=+58.951653424" observedRunningTime="2024-12-13 09:04:45.330814828 +0000 UTC m=+59.488497357" watchObservedRunningTime="2024-12-13 09:04:45.332385122 +0000 UTC m=+59.490067611" Dec 13 09:04:45.969924 containerd[1469]: time="2024-12-13T09:04:45.969886952Z" level=info msg="StopPodSandbox for \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\"" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.023 [WARNING][5300] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5cf6eb7-a899-4823-b6d3-c77cbab40250", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21", Pod:"coredns-7db6d8ff4d-8vqm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf8f2dc739b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.023 [INFO][5300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.023 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" iface="eth0" netns="" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.023 [INFO][5300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.023 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.052 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.052 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.052 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.065 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.065 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.067 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.069867 containerd[1469]: 2024-12-13 09:04:46.068 [INFO][5300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.070570 containerd[1469]: time="2024-12-13T09:04:46.069912899Z" level=info msg="TearDown network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\" successfully" Dec 13 09:04:46.070570 containerd[1469]: time="2024-12-13T09:04:46.069938939Z" level=info msg="StopPodSandbox for \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\" returns successfully" Dec 13 09:04:46.071173 containerd[1469]: time="2024-12-13T09:04:46.071141630Z" level=info msg="RemovePodSandbox for \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\"" Dec 13 09:04:46.078014 containerd[1469]: time="2024-12-13T09:04:46.077951571Z" level=info msg="Forcibly stopping sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\"" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.123 [WARNING][5324] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e5cf6eb7-a899-4823-b6d3-c77cbab40250", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"4f6be0e32855c36c0804b2a3fcaf54af510724ec87d2678ba0595c04dc441b21", Pod:"coredns-7db6d8ff4d-8vqm4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicf8f2dc739b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.124 [INFO][5324] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.124 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" iface="eth0" netns="" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.124 [INFO][5324] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.124 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.144 [INFO][5330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.144 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.144 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.154 [WARNING][5330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.154 [INFO][5330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" HandleID="k8s-pod-network.c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--8vqm4-eth0" Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.157 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.163340 containerd[1469]: 2024-12-13 09:04:46.160 [INFO][5324] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112" Dec 13 09:04:46.163340 containerd[1469]: time="2024-12-13T09:04:46.163323102Z" level=info msg="TearDown network for sandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\" successfully" Dec 13 09:04:46.181811 containerd[1469]: time="2024-12-13T09:04:46.180851900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:04:46.183201 containerd[1469]: time="2024-12-13T09:04:46.183106481Z" level=info msg="RemovePodSandbox \"c26ca972bba2c68a33ac3b1bd5bc5ab74b1fc4671383982fc51cffd46a633112\" returns successfully" Dec 13 09:04:46.186630 containerd[1469]: time="2024-12-13T09:04:46.186425471Z" level=info msg="StopPodSandbox for \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\"" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.243 [WARNING][5350] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f31f3787-a93b-48e8-9d06-71efae4d1e4f", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d", Pod:"calico-apiserver-5dc6fbbbbc-bmlsz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2eff1a0bee0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.244 [INFO][5350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.244 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" iface="eth0" netns="" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.244 [INFO][5350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.244 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.267 [INFO][5357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.268 [INFO][5357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.268 [INFO][5357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.277 [WARNING][5357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.277 [INFO][5357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.279 [INFO][5357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.282712 containerd[1469]: 2024-12-13 09:04:46.281 [INFO][5350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.283251 containerd[1469]: time="2024-12-13T09:04:46.282775941Z" level=info msg="TearDown network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\" successfully" Dec 13 09:04:46.283251 containerd[1469]: time="2024-12-13T09:04:46.282804701Z" level=info msg="StopPodSandbox for \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\" returns successfully" Dec 13 09:04:46.283691 containerd[1469]: time="2024-12-13T09:04:46.283662149Z" level=info msg="RemovePodSandbox for \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\"" Dec 13 09:04:46.283739 containerd[1469]: time="2024-12-13T09:04:46.283701549Z" level=info msg="Forcibly stopping sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\"" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.335 [WARNING][5375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f31f3787-a93b-48e8-9d06-71efae4d1e4f", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"930aa36e05bb1c97b451ff3373cc4593fb9e4ce8598a2a704224f8f2ad53102d", Pod:"calico-apiserver-5dc6fbbbbc-bmlsz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2eff1a0bee0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.336 [INFO][5375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.336 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" iface="eth0" netns="" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.336 [INFO][5375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.336 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.358 [INFO][5381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.359 [INFO][5381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.359 [INFO][5381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.372 [WARNING][5381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.372 [INFO][5381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" HandleID="k8s-pod-network.cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--bmlsz-eth0" Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.374 [INFO][5381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.377086 containerd[1469]: 2024-12-13 09:04:46.375 [INFO][5375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8" Dec 13 09:04:46.377612 containerd[1469]: time="2024-12-13T09:04:46.377115433Z" level=info msg="TearDown network for sandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\" successfully" Dec 13 09:04:46.383304 containerd[1469]: time="2024-12-13T09:04:46.383247968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:04:46.383425 containerd[1469]: time="2024-12-13T09:04:46.383341209Z" level=info msg="RemovePodSandbox \"cbd5c2a3370ccd0367eb414fa0c2bb141c3d5b09d72cb1fcc73708be96de1fd8\" returns successfully" Dec 13 09:04:46.383949 containerd[1469]: time="2024-12-13T09:04:46.383898774Z" level=info msg="StopPodSandbox for \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\"" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.434 [WARNING][5400] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0", GenerateName:"calico-kube-controllers-85c4855bd8-", Namespace:"calico-system", SelfLink:"", UID:"98dfd1c7-089d-4dd5-bd73-26a0d273295c", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c4855bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065", Pod:"calico-kube-controllers-85c4855bd8-km44p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a0fab50088", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.435 [INFO][5400] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.435 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" iface="eth0" netns="" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.435 [INFO][5400] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.435 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.461 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.461 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.461 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.474 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.474 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.477 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.482376 containerd[1469]: 2024-12-13 09:04:46.480 [INFO][5400] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.484134 containerd[1469]: time="2024-12-13T09:04:46.483074510Z" level=info msg="TearDown network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\" successfully" Dec 13 09:04:46.484134 containerd[1469]: time="2024-12-13T09:04:46.483102230Z" level=info msg="StopPodSandbox for \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\" returns successfully" Dec 13 09:04:46.484134 containerd[1469]: time="2024-12-13T09:04:46.483743236Z" level=info msg="RemovePodSandbox for \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\"" Dec 13 09:04:46.484134 containerd[1469]: time="2024-12-13T09:04:46.483771436Z" level=info msg="Forcibly stopping sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\"" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.541 [WARNING][5424] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0", GenerateName:"calico-kube-controllers-85c4855bd8-", Namespace:"calico-system", SelfLink:"", UID:"98dfd1c7-089d-4dd5-bd73-26a0d273295c", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c4855bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065", Pod:"calico-kube-controllers-85c4855bd8-km44p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a0fab50088", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.541 [INFO][5424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.541 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" iface="eth0" netns="" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.541 [INFO][5424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.541 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.566 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.566 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.567 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.576 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.576 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" HandleID="k8s-pod-network.b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.579 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.583562 containerd[1469]: 2024-12-13 09:04:46.581 [INFO][5424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005" Dec 13 09:04:46.585339 containerd[1469]: time="2024-12-13T09:04:46.585284753Z" level=info msg="TearDown network for sandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\" successfully" Dec 13 09:04:46.600094 containerd[1469]: time="2024-12-13T09:04:46.599900525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:04:46.600094 containerd[1469]: time="2024-12-13T09:04:46.599981166Z" level=info msg="RemovePodSandbox \"b25e367989dc1f43f73b851a3000b1de04a898cb7a9b4fa99a8ee3b732a24005\" returns successfully" Dec 13 09:04:46.600645 containerd[1469]: time="2024-12-13T09:04:46.600612131Z" level=info msg="StopPodSandbox for \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\"" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.645 [WARNING][5448] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8eb77da9-ce92-437d-a506-03a84d1e2646", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0", Pod:"calico-apiserver-5dc6fbbbbc-n6v2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93ba1381c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.645 [INFO][5448] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.645 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" iface="eth0" netns="" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.645 [INFO][5448] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.645 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.672 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.672 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.672 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.683 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.683 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.685 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.688812 containerd[1469]: 2024-12-13 09:04:46.687 [INFO][5448] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.688812 containerd[1469]: time="2024-12-13T09:04:46.688687407Z" level=info msg="TearDown network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\" successfully" Dec 13 09:04:46.688812 containerd[1469]: time="2024-12-13T09:04:46.688712087Z" level=info msg="StopPodSandbox for \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\" returns successfully" Dec 13 09:04:46.689560 containerd[1469]: time="2024-12-13T09:04:46.689518974Z" level=info msg="RemovePodSandbox for \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\"" Dec 13 09:04:46.689622 containerd[1469]: time="2024-12-13T09:04:46.689571215Z" level=info msg="Forcibly stopping sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\"" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.741 [WARNING][5472] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0", GenerateName:"calico-apiserver-5dc6fbbbbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8eb77da9-ce92-437d-a506-03a84d1e2646", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc6fbbbbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"4b593dae3f0befdae8a8083dc6204e01dcadb1fd122accf1599aae814ed8cbc0", Pod:"calico-apiserver-5dc6fbbbbc-n6v2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93ba1381c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.742 [INFO][5472] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.742 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" iface="eth0" netns="" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.742 [INFO][5472] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.742 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.765 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.765 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.766 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.778 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.778 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" HandleID="k8s-pod-network.cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--apiserver--5dc6fbbbbc--n6v2x-eth0" Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.780 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.783848 containerd[1469]: 2024-12-13 09:04:46.782 [INFO][5472] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58" Dec 13 09:04:46.784637 containerd[1469]: time="2024-12-13T09:04:46.783952827Z" level=info msg="TearDown network for sandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\" successfully" Dec 13 09:04:46.788448 containerd[1469]: time="2024-12-13T09:04:46.788287826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:04:46.788448 containerd[1469]: time="2024-12-13T09:04:46.788362667Z" level=info msg="RemovePodSandbox \"cee7ebb1fba18159ff11307939bdf389a2d589ccec8cc2afd86ae202032c0c58\" returns successfully" Dec 13 09:04:46.788812 containerd[1469]: time="2024-12-13T09:04:46.788791711Z" level=info msg="StopPodSandbox for \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\"" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.831 [WARNING][5496] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5aed41f-ee8e-4f6b-9d24-6472c4316100", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa", Pod:"csi-node-driver-rzphw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b221ecdd8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.832 [INFO][5496] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.832 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" iface="eth0" netns="" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.832 [INFO][5496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.832 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.855 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.855 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.855 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.865 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.865 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.868 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.872697 containerd[1469]: 2024-12-13 09:04:46.870 [INFO][5496] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.872697 containerd[1469]: time="2024-12-13T09:04:46.871554538Z" level=info msg="TearDown network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\" successfully" Dec 13 09:04:46.872697 containerd[1469]: time="2024-12-13T09:04:46.871592259Z" level=info msg="StopPodSandbox for \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\" returns successfully" Dec 13 09:04:46.872697 containerd[1469]: time="2024-12-13T09:04:46.872369626Z" level=info msg="RemovePodSandbox for \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\"" Dec 13 09:04:46.872697 containerd[1469]: time="2024-12-13T09:04:46.872422386Z" level=info msg="Forcibly stopping sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\"" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.914 [WARNING][5521] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5aed41f-ee8e-4f6b-9d24-6472c4316100", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"c49881a562a1f914bc1c18ed42ce9ee84ce2ec18215b849aae8a711ce8bdf0aa", Pod:"csi-node-driver-rzphw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b221ecdd8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.914 [INFO][5521] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.914 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" iface="eth0" netns="" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.914 [INFO][5521] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.914 [INFO][5521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.940 [INFO][5528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.940 [INFO][5528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.940 [INFO][5528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.952 [WARNING][5528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.952 [INFO][5528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" HandleID="k8s-pod-network.6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Workload="ci--4081--2--1--6--29baf1648e-k8s-csi--node--driver--rzphw-eth0" Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.954 [INFO][5528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:46.958342 containerd[1469]: 2024-12-13 09:04:46.956 [INFO][5521] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6" Dec 13 09:04:46.958342 containerd[1469]: time="2024-12-13T09:04:46.957904518Z" level=info msg="TearDown network for sandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\" successfully" Dec 13 09:04:46.971464 containerd[1469]: time="2024-12-13T09:04:46.971259599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:04:46.971464 containerd[1469]: time="2024-12-13T09:04:46.971358240Z" level=info msg="RemovePodSandbox \"6cac5e5ccd560f5ddfc5e1ad84038ca7fcb3e0f5b6914b6a6be9959f524278d6\" returns successfully" Dec 13 09:04:46.972450 containerd[1469]: time="2024-12-13T09:04:46.972423929Z" level=info msg="StopPodSandbox for \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\"" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.018 [WARNING][5546] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65b6b39e-1d72-462b-8014-1227230aa5b7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6", Pod:"coredns-7db6d8ff4d-92kcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9850acc98a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.018 [INFO][5546] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.018 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" iface="eth0" netns="" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.018 [INFO][5546] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.019 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.038 [INFO][5553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.038 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.038 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.048 [WARNING][5553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.048 [INFO][5553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.050 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:47.053431 containerd[1469]: 2024-12-13 09:04:47.051 [INFO][5546] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.054114 containerd[1469]: time="2024-12-13T09:04:47.053666857Z" level=info msg="TearDown network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\" successfully" Dec 13 09:04:47.054114 containerd[1469]: time="2024-12-13T09:04:47.053727938Z" level=info msg="StopPodSandbox for \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\" returns successfully" Dec 13 09:04:47.055315 containerd[1469]: time="2024-12-13T09:04:47.054958069Z" level=info msg="RemovePodSandbox for \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\"" Dec 13 09:04:47.055315 containerd[1469]: time="2024-12-13T09:04:47.055004509Z" level=info msg="Forcibly stopping sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\"" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.110 [WARNING][5571] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65b6b39e-1d72-462b-8014-1227230aa5b7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"6fbc28ea8231d7f9c9368d72a2ee51a0645b70d3cde62c0690ced45b8d6f5cb6", Pod:"coredns-7db6d8ff4d-92kcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9850acc98a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.110 [INFO][5571] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.110 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" iface="eth0" netns="" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.111 [INFO][5571] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.111 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.136 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.136 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.136 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.149 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.149 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" HandleID="k8s-pod-network.5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Workload="ci--4081--2--1--6--29baf1648e-k8s-coredns--7db6d8ff4d--92kcx-eth0" Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.151 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:47.156060 containerd[1469]: 2024-12-13 09:04:47.153 [INFO][5571] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983" Dec 13 09:04:47.156060 containerd[1469]: time="2024-12-13T09:04:47.154754599Z" level=info msg="TearDown network for sandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\" successfully" Dec 13 09:04:47.159249 containerd[1469]: time="2024-12-13T09:04:47.159147198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:04:47.159345 containerd[1469]: time="2024-12-13T09:04:47.159303880Z" level=info msg="RemovePodSandbox \"5cdea306b810b82f1b7f58d749ae630fb73366e472231977198a9ea51c4d0983\" returns successfully" Dec 13 09:04:52.818429 containerd[1469]: time="2024-12-13T09:04:52.818359495Z" level=info msg="StopContainer for \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\" with timeout 300 (s)" Dec 13 09:04:52.819144 containerd[1469]: time="2024-12-13T09:04:52.818823139Z" level=info msg="Stop container \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\" with signal terminated" Dec 13 09:04:53.011960 containerd[1469]: time="2024-12-13T09:04:53.011904885Z" level=info msg="StopContainer for \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\" with timeout 30 (s)" Dec 13 09:04:53.012591 containerd[1469]: time="2024-12-13T09:04:53.012558770Z" level=info msg="Stop container \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\" with signal terminated" Dec 13 09:04:53.035891 systemd[1]: cri-containerd-e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b.scope: Deactivated successfully. Dec 13 09:04:53.074505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b-rootfs.mount: Deactivated successfully. 
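[Annotation] The "StopContainer ... with timeout N (s)" / "Stop container ... with signal terminated" pairs are containerd's graceful-stop protocol: the container's init process receives SIGTERM and is given the grace period (300 s, 30 s and 5 s for the three containers being stopped here) before being killed outright. A hedged standalone sketch of that escalation pattern, not containerd's code:

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM, waits up to grace, then escalates to
// SIGKILL — the same escalation containerd applies in the records above.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM) // "with signal terminated"
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL once the timeout elapses
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "600")
	_ = cmd.Start()
	_ = stopWithTimeout(cmd, 5*time.Second) // cf. "with timeout 5 (s)"
}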
Dec 13 09:04:53.103443 containerd[1469]: time="2024-12-13T09:04:53.103398967Z" level=info msg="StopContainer for \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\" with timeout 5 (s)" Dec 13 09:04:53.103962 containerd[1469]: time="2024-12-13T09:04:53.103926052Z" level=info msg="Stop container \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\" with signal terminated" Dec 13 09:04:53.125504 containerd[1469]: time="2024-12-13T09:04:53.125412471Z" level=info msg="shim disconnected" id=e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b namespace=k8s.io Dec 13 09:04:53.125504 containerd[1469]: time="2024-12-13T09:04:53.125470391Z" level=warning msg="cleaning up after shim disconnected" id=e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b namespace=k8s.io Dec 13 09:04:53.125504 containerd[1469]: time="2024-12-13T09:04:53.125480152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:53.151887 containerd[1469]: time="2024-12-13T09:04:53.151627289Z" level=info msg="StopContainer for \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\" returns successfully" Dec 13 09:04:53.152345 containerd[1469]: time="2024-12-13T09:04:53.152310895Z" level=info msg="StopPodSandbox for \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\"" Dec 13 09:04:53.152418 containerd[1469]: time="2024-12-13T09:04:53.152354455Z" level=info msg="Container to stop \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:04:53.158068 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065-shm.mount: Deactivated successfully. Dec 13 09:04:53.168404 systemd[1]: cri-containerd-12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e.scope: Deactivated successfully. Dec 13 09:04:53.168748 systemd[1]: cri-containerd-12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e.scope: Consumed 3.261s CPU time. Dec 13 09:04:53.170741 systemd[1]: cri-containerd-eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065.scope: Deactivated successfully. Dec 13 09:04:53.207605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e-rootfs.mount: Deactivated successfully. Dec 13 09:04:53.210638 containerd[1469]: time="2024-12-13T09:04:53.208364722Z" level=info msg="shim disconnected" id=12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e namespace=k8s.io Dec 13 09:04:53.210638 containerd[1469]: time="2024-12-13T09:04:53.208496443Z" level=warning msg="cleaning up after shim disconnected" id=12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e namespace=k8s.io Dec 13 09:04:53.210638 containerd[1469]: time="2024-12-13T09:04:53.208510043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:53.223622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065-rootfs.mount: Deactivated successfully. 
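[Annotation] "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" marks the runtime-v2 shim of the exited task going away and containerd reaping what it left behind; the matching systemd lines show the per-container scope (with its accumulated CPU accounting) and rootfs mount units being torn down. The "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" lines are the CRI state guard: when a sandbox is stopped, containers that have already exited are skipped rather than signalled again. Roughly, with hypothetical names:

package main

import "fmt"

// ContainerState mirrors the CRI states quoted in the log messages above.
type ContainerState int

const (
	Created ContainerState = iota
	Running
	Exited
	Unknown
)

// needsStop applies the guard behind the "must be in running or unknown
// state" messages: only live (or indeterminate) containers get a signal.
func needsStop(s ContainerState) bool {
	return s == Running || s == Unknown
}

func main() {
	fmt.Println(needsStop(Exited))  // false -> skipped with the info message
	fmt.Println(needsStop(Running)) // true  -> SIGTERM/SIGKILL path
}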
Dec 13 09:04:53.228597 containerd[1469]: time="2024-12-13T09:04:53.228375089Z" level=info msg="shim disconnected" id=eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065 namespace=k8s.io Dec 13 09:04:53.228597 containerd[1469]: time="2024-12-13T09:04:53.228443890Z" level=warning msg="cleaning up after shim disconnected" id=eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065 namespace=k8s.io Dec 13 09:04:53.228597 containerd[1469]: time="2024-12-13T09:04:53.228454010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:53.245495 containerd[1469]: time="2024-12-13T09:04:53.245438271Z" level=info msg="StopContainer for \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\" returns successfully" Dec 13 09:04:53.245985 containerd[1469]: time="2024-12-13T09:04:53.245891515Z" level=info msg="StopPodSandbox for \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\"" Dec 13 09:04:53.245985 containerd[1469]: time="2024-12-13T09:04:53.245927755Z" level=info msg="Container to stop \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:04:53.245985 containerd[1469]: time="2024-12-13T09:04:53.245940475Z" level=info msg="Container to stop \"6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:04:53.245985 containerd[1469]: time="2024-12-13T09:04:53.245950675Z" level=info msg="Container to stop \"59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:04:53.273329 systemd[1]: cri-containerd-d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7.scope: Deactivated successfully. 
Dec 13 09:04:53.312661 containerd[1469]: time="2024-12-13T09:04:53.312601311Z" level=info msg="shim disconnected" id=d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7 namespace=k8s.io Dec 13 09:04:53.312661 containerd[1469]: time="2024-12-13T09:04:53.312657151Z" level=warning msg="cleaning up after shim disconnected" id=d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7 namespace=k8s.io Dec 13 09:04:53.312661 containerd[1469]: time="2024-12-13T09:04:53.312665951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:53.340654 containerd[1469]: time="2024-12-13T09:04:53.340530944Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:04:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:04:53.346533 kubelet[2739]: I1213 09:04:53.346494 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Dec 13 09:04:53.352184 containerd[1469]: time="2024-12-13T09:04:53.352098920Z" level=info msg="TearDown network for sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" successfully" Dec 13 09:04:53.352184 containerd[1469]: time="2024-12-13T09:04:53.352139880Z" level=info msg="StopPodSandbox for \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" returns successfully" Dec 13 09:04:53.382924 systemd-networkd[1379]: cali3a0fab50088: Link DOWN Dec 13 09:04:53.382931 systemd-networkd[1379]: cali3a0fab50088: Lost carrier Dec 13 09:04:53.427308 kubelet[2739]: I1213 09:04:53.426645 2739 topology_manager.go:215] "Topology Admit Handler" podUID="33d425e5-14d9-4be8-890f-a325b3727c73" podNamespace="calico-system" podName="calico-node-gb9nw" Dec 13 09:04:53.427308 kubelet[2739]: E1213 09:04:53.426735 2739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67d6ce33-fde6-47c0-a23e-dcb137fc2649" containerName="flexvol-driver" Dec 13 09:04:53.427308 kubelet[2739]: E1213 09:04:53.426756 2739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67d6ce33-fde6-47c0-a23e-dcb137fc2649" containerName="install-cni" Dec 13 09:04:53.427308 kubelet[2739]: E1213 09:04:53.426764 2739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67d6ce33-fde6-47c0-a23e-dcb137fc2649" containerName="calico-node" Dec 13 09:04:53.427308 kubelet[2739]: I1213 09:04:53.426792 2739 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d6ce33-fde6-47c0-a23e-dcb137fc2649" containerName="calico-node" Dec 13 09:04:53.440215 systemd[1]: Created slice kubepods-besteffort-pod33d425e5_14d9_4be8_890f_a325b3727c73.slice - libcontainer container kubepods-besteffort-pod33d425e5_14d9_4be8_890f_a325b3727c73.slice. 
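[Annotation] The calico-node replacement happens here: kubelet admits calico-node-gb9nw (UID 33d425e5-...) while the old pod's containers (UID 67d6ce33-..., flexvol-driver / install-cni / calico-node) are purged from the cpu and memory managers' stale state. Since calico-node runs in the host network namespace, its sandbox d985588a... tears down with no CNI records, whereas the "Link DOWN / Lost carrier" on cali3a0fab50088 is the kube-controllers endpoint's host-side veth going away. The "Created slice" record's name is derived mechanically from the pod UID by the systemd cgroup driver; a sketch of the escaping (simplified relative to kubelet's real escaping rules):

package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName rebuilds the slice name seen in the "Created slice"
// record: prefix with the QoS class and replace the UID's dashes, since
// "-" is systemd's hierarchy separator inside slice names.
func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortSliceName("33d425e5-14d9-4be8-890f-a325b3727c73"))
	// kubepods-besteffort-pod33d425e5_14d9_4be8_890f_a325b3727c73.slice
}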
Dec 13 09:04:53.472470 kubelet[2739]: I1213 09:04:53.472419 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-lib-modules\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472610 kubelet[2739]: I1213 09:04:53.472494 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67d6ce33-fde6-47c0-a23e-dcb137fc2649-node-certs\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472610 kubelet[2739]: I1213 09:04:53.472511 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-flexvol-driver-host\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472610 kubelet[2739]: I1213 09:04:53.472532 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-lib-calico\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472610 kubelet[2739]: I1213 09:04:53.472548 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-net-dir\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472610 kubelet[2739]: I1213 09:04:53.472563 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-policysync\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472610 kubelet[2739]: I1213 09:04:53.472579 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-run-calico\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472753 kubelet[2739]: I1213 09:04:53.472594 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-bin-dir\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472753 kubelet[2739]: I1213 09:04:53.472613 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67d6ce33-fde6-47c0-a23e-dcb137fc2649-tigera-ca-bundle\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472753 kubelet[2739]: I1213 09:04:53.472631 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brjhx\" (UniqueName: \"kubernetes.io/projected/67d6ce33-fde6-47c0-a23e-dcb137fc2649-kube-api-access-brjhx\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472753 kubelet[2739]: I1213 
09:04:53.472645 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-xtables-lock\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472753 kubelet[2739]: I1213 09:04:53.472661 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-log-dir\") pod \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\" (UID: \"67d6ce33-fde6-47c0-a23e-dcb137fc2649\") " Dec 13 09:04:53.472753 kubelet[2739]: I1213 09:04:53.472723 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33d425e5-14d9-4be8-890f-a325b3727c73-tigera-ca-bundle\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472886 kubelet[2739]: I1213 09:04:53.472747 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-flexvol-driver-host\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472886 kubelet[2739]: I1213 09:04:53.472771 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-cni-log-dir\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472886 kubelet[2739]: I1213 09:04:53.472796 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-cni-bin-dir\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472886 kubelet[2739]: I1213 09:04:53.472816 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-var-run-calico\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472886 kubelet[2739]: I1213 09:04:53.472834 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-var-lib-calico\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472999 kubelet[2739]: I1213 09:04:53.472849 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-xtables-lock\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472999 kubelet[2739]: I1213 09:04:53.472866 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-lib-modules\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472999 kubelet[2739]: I1213 09:04:53.472889 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-policysync\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472999 kubelet[2739]: I1213 09:04:53.472906 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/33d425e5-14d9-4be8-890f-a325b3727c73-node-certs\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.472999 kubelet[2739]: I1213 09:04:53.472927 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sps\" (UniqueName: \"kubernetes.io/projected/33d425e5-14d9-4be8-890f-a325b3727c73-kube-api-access-r5sps\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.473120 kubelet[2739]: I1213 09:04:53.472946 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/33d425e5-14d9-4be8-890f-a325b3727c73-cni-net-dir\") pod \"calico-node-gb9nw\" (UID: \"33d425e5-14d9-4be8-890f-a325b3727c73\") " pod="calico-system/calico-node-gb9nw" Dec 13 09:04:53.475245 kubelet[2739]: I1213 09:04:53.474613 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.478544 kubelet[2739]: I1213 09:04:53.478489 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.479499 kubelet[2739]: I1213 09:04:53.479387 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.479499 kubelet[2739]: I1213 09:04:53.479423 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.479499 kubelet[2739]: I1213 09:04:53.479441 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.479499 kubelet[2739]: I1213 09:04:53.479459 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.479499 kubelet[2739]: I1213 09:04:53.479475 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-policysync" (OuterVolumeSpecName: "policysync") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.494211 kubelet[2739]: I1213 09:04:53.494064 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d6ce33-fde6-47c0-a23e-dcb137fc2649-kube-api-access-brjhx" (OuterVolumeSpecName: "kube-api-access-brjhx") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "kube-api-access-brjhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:04:53.494211 kubelet[2739]: I1213 09:04:53.494156 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.500092 kubelet[2739]: I1213 09:04:53.500031 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d6ce33-fde6-47c0-a23e-dcb137fc2649-node-certs" (OuterVolumeSpecName: "node-certs") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 09:04:53.500307 kubelet[2739]: I1213 09:04:53.500111 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:04:53.516114 kubelet[2739]: I1213 09:04:53.516053 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d6ce33-fde6-47c0-a23e-dcb137fc2649-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "67d6ce33-fde6-47c0-a23e-dcb137fc2649" (UID: "67d6ce33-fde6-47c0-a23e-dcb137fc2649"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574010 2739 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-lib-modules\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574045 2739 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-lib-calico\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574055 2739 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-net-dir\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574063 2739 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-bin-dir\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574072 2739 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67d6ce33-fde6-47c0-a23e-dcb137fc2649-tigera-ca-bundle\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574081 2739 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-xtables-lock\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574089 2739 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-cni-log-dir\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578134 kubelet[2739]: I1213 09:04:53.574098 2739 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67d6ce33-fde6-47c0-a23e-dcb137fc2649-node-certs\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578533 kubelet[2739]: I1213 09:04:53.574109 2739 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-flexvol-driver-host\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578533 kubelet[2739]: I1213 09:04:53.574118 2739 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-policysync\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578533 kubelet[2739]: I1213 09:04:53.574126 2739 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-brjhx\" (UniqueName: \"kubernetes.io/projected/67d6ce33-fde6-47c0-a23e-dcb137fc2649-kube-api-access-brjhx\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.578533 kubelet[2739]: I1213 09:04:53.574134 2739 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67d6ce33-fde6-47c0-a23e-dcb137fc2649-var-run-calico\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.380 [INFO][5774] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.380 [INFO][5774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" iface="eth0" netns="/var/run/netns/cni-8055a180-1bc1-816d-7f08-f98903161970" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.381 [INFO][5774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" iface="eth0" netns="/var/run/netns/cni-8055a180-1bc1-816d-7f08-f98903161970" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.398 [INFO][5774] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" after=17.600347ms iface="eth0" netns="/var/run/netns/cni-8055a180-1bc1-816d-7f08-f98903161970" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.398 [INFO][5774] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.398 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.440 [INFO][5806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.440 [INFO][5806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.440 [INFO][5806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.577 [INFO][5806] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.577 [INFO][5806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0" Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.580 [INFO][5806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:53.585980 containerd[1469]: 2024-12-13 09:04:53.582 [INFO][5774] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Dec 13 09:04:53.586433 containerd[1469]: time="2024-12-13T09:04:53.586359752Z" level=info msg="TearDown network for sandbox \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" successfully" Dec 13 09:04:53.586433 containerd[1469]: time="2024-12-13T09:04:53.586404353Z" level=info msg="StopPodSandbox for \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" returns successfully" Dec 13 09:04:53.674878 kubelet[2739]: I1213 09:04:53.674504 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98dfd1c7-089d-4dd5-bd73-26a0d273295c-tigera-ca-bundle\") pod \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\" (UID: \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\") " Dec 13 09:04:53.674878 kubelet[2739]: I1213 09:04:53.674567 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxbj2\" (UniqueName: \"kubernetes.io/projected/98dfd1c7-089d-4dd5-bd73-26a0d273295c-kube-api-access-zxbj2\") pod \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\" (UID: \"98dfd1c7-089d-4dd5-bd73-26a0d273295c\") " Dec 13 09:04:53.682290 kubelet[2739]: I1213 09:04:53.682214 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dfd1c7-089d-4dd5-bd73-26a0d273295c-kube-api-access-zxbj2" (OuterVolumeSpecName: "kube-api-access-zxbj2") pod "98dfd1c7-089d-4dd5-bd73-26a0d273295c" (UID: "98dfd1c7-089d-4dd5-bd73-26a0d273295c"). InnerVolumeSpecName "kube-api-access-zxbj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:04:53.683675 kubelet[2739]: I1213 09:04:53.683486 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98dfd1c7-089d-4dd5-bd73-26a0d273295c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "98dfd1c7-089d-4dd5-bd73-26a0d273295c" (UID: "98dfd1c7-089d-4dd5-bd73-26a0d273295c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:04:53.746724 containerd[1469]: time="2024-12-13T09:04:53.746678968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gb9nw,Uid:33d425e5-14d9-4be8-890f-a325b3727c73,Namespace:calico-system,Attempt:0,}" Dec 13 09:04:53.772219 containerd[1469]: time="2024-12-13T09:04:53.771096372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:53.772219 containerd[1469]: time="2024-12-13T09:04:53.771160613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:53.772219 containerd[1469]: time="2024-12-13T09:04:53.771176293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:53.772219 containerd[1469]: time="2024-12-13T09:04:53.771309414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:53.775815 kubelet[2739]: I1213 09:04:53.775774 2739 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98dfd1c7-089d-4dd5-bd73-26a0d273295c-tigera-ca-bundle\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.775815 kubelet[2739]: I1213 09:04:53.775810 2739 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zxbj2\" (UniqueName: \"kubernetes.io/projected/98dfd1c7-089d-4dd5-bd73-26a0d273295c-kube-api-access-zxbj2\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:53.798432 systemd[1]: Started cri-containerd-17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e.scope - libcontainer container 17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e. Dec 13 09:04:53.843162 containerd[1469]: time="2024-12-13T09:04:53.842935171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gb9nw,Uid:33d425e5-14d9-4be8-890f-a325b3727c73,Namespace:calico-system,Attempt:0,} returns sandbox id \"17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e\"" Dec 13 09:04:53.848070 containerd[1469]: time="2024-12-13T09:04:53.846174638Z" level=info msg="CreateContainer within sandbox \"17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 09:04:53.867715 containerd[1469]: time="2024-12-13T09:04:53.867367934Z" level=info msg="CreateContainer within sandbox \"17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21\"" Dec 13 09:04:53.869859 containerd[1469]: time="2024-12-13T09:04:53.868693345Z" level=info msg="StartContainer for \"27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21\"" Dec 13 09:04:53.910065 systemd[1]: Started cri-containerd-27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21.scope - libcontainer container 27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21. Dec 13 09:04:53.939428 systemd[1]: var-lib-kubelet-pods-98dfd1c7\x2d089d\x2d4dd5\x2dbd73\x2d26a0d273295c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Dec 13 09:04:53.939571 systemd[1]: run-netns-cni\x2d8055a180\x2d1bc1\x2d816d\x2d7f08\x2df98903161970.mount: Deactivated successfully. Dec 13 09:04:53.939651 systemd[1]: var-lib-kubelet-pods-67d6ce33\x2dfde6\x2d47c0\x2da23e\x2ddcb137fc2649-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Dec 13 09:04:53.939737 systemd[1]: var-lib-kubelet-pods-98dfd1c7\x2d089d\x2d4dd5\x2dbd73\x2d26a0d273295c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzxbj2.mount: Deactivated successfully. Dec 13 09:04:53.940575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7-rootfs.mount: Deactivated successfully. Dec 13 09:04:53.940783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7-shm.mount: Deactivated successfully. Dec 13 09:04:53.940941 systemd[1]: var-lib-kubelet-pods-67d6ce33\x2dfde6\x2d47c0\x2da23e\x2ddcb137fc2649-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Dec 13 09:04:53.941110 systemd[1]: var-lib-kubelet-pods-67d6ce33\x2dfde6\x2d47c0\x2da23e\x2ddcb137fc2649-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbrjhx.mount: Deactivated successfully. Dec 13 09:04:53.966452 systemd[1]: Removed slice kubepods-besteffort-pod67d6ce33_fde6_47c0_a23e_dcb137fc2649.slice - libcontainer container kubepods-besteffort-pod67d6ce33_fde6_47c0_a23e_dcb137fc2649.slice. Dec 13 09:04:53.966562 systemd[1]: kubepods-besteffort-pod67d6ce33_fde6_47c0_a23e_dcb137fc2649.slice: Consumed 3.747s CPU time. Dec 13 09:04:53.971493 systemd[1]: Removed slice kubepods-besteffort-pod98dfd1c7_089d_4dd5_bd73_26a0d273295c.slice - libcontainer container kubepods-besteffort-pod98dfd1c7_089d_4dd5_bd73_26a0d273295c.slice. Dec 13 09:04:53.984364 containerd[1469]: time="2024-12-13T09:04:53.984172108Z" level=info msg="StartContainer for \"27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21\" returns successfully" Dec 13 09:04:54.011561 systemd[1]: cri-containerd-27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21.scope: Deactivated successfully. Dec 13 09:04:54.044643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21-rootfs.mount: Deactivated successfully. Dec 13 09:04:54.053781 containerd[1469]: time="2024-12-13T09:04:54.053421400Z" level=info msg="shim disconnected" id=27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21 namespace=k8s.io Dec 13 09:04:54.053781 containerd[1469]: time="2024-12-13T09:04:54.053486641Z" level=warning msg="cleaning up after shim disconnected" id=27c9632d868c6c4e587bdfd95f0a0d59210cf73fd22f5b9fdcf40c9e2f533b21 namespace=k8s.io Dec 13 09:04:54.053781 containerd[1469]: time="2024-12-13T09:04:54.053496361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:54.265047 systemd[1]: cri-containerd-0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e.scope: Deactivated successfully. Dec 13 09:04:54.292041 containerd[1469]: time="2024-12-13T09:04:54.291914567Z" level=info msg="shim disconnected" id=0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e namespace=k8s.io Dec 13 09:04:54.292041 containerd[1469]: time="2024-12-13T09:04:54.291994928Z" level=warning msg="cleaning up after shim disconnected" id=0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e namespace=k8s.io Dec 13 09:04:54.292041 containerd[1469]: time="2024-12-13T09:04:54.292003968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:54.295840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e-rootfs.mount: Deactivated successfully. 
Dec 13 09:04:54.310592 containerd[1469]: time="2024-12-13T09:04:54.310409119Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:04:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:04:54.320547 containerd[1469]: time="2024-12-13T09:04:54.320379202Z" level=info msg="StopContainer for \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\" returns successfully" Dec 13 09:04:54.321412 containerd[1469]: time="2024-12-13T09:04:54.320876846Z" level=info msg="StopPodSandbox for \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\"" Dec 13 09:04:54.321412 containerd[1469]: time="2024-12-13T09:04:54.320922246Z" level=info msg="Container to stop \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:04:54.326515 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f-shm.mount: Deactivated successfully. Dec 13 09:04:54.334063 systemd[1]: cri-containerd-5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f.scope: Deactivated successfully. Dec 13 09:04:54.365698 containerd[1469]: time="2024-12-13T09:04:54.365326892Z" level=info msg="CreateContainer within sandbox \"17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 09:04:54.370748 kubelet[2739]: I1213 09:04:54.370663 2739 scope.go:117] "RemoveContainer" containerID="12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e" Dec 13 09:04:54.387660 containerd[1469]: time="2024-12-13T09:04:54.384321169Z" level=info msg="RemoveContainer for \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\"" Dec 13 09:04:54.385163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f-rootfs.mount: Deactivated successfully. 
Dec 13 09:04:54.389827 containerd[1469]: time="2024-12-13T09:04:54.388444603Z" level=info msg="shim disconnected" id=5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f namespace=k8s.io Dec 13 09:04:54.389827 containerd[1469]: time="2024-12-13T09:04:54.388498283Z" level=warning msg="cleaning up after shim disconnected" id=5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f namespace=k8s.io Dec 13 09:04:54.389827 containerd[1469]: time="2024-12-13T09:04:54.388506923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:54.398290 containerd[1469]: time="2024-12-13T09:04:54.397553478Z" level=info msg="RemoveContainer for \"12aeb792bb0a704f33d2cebbdfea3618d7a8622c62cfffe2c3f3c1fd5ff24a8e\" returns successfully" Dec 13 09:04:54.398418 kubelet[2739]: I1213 09:04:54.398349 2739 scope.go:117] "RemoveContainer" containerID="6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d" Dec 13 09:04:54.404367 containerd[1469]: time="2024-12-13T09:04:54.404118092Z" level=info msg="RemoveContainer for \"6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d\"" Dec 13 09:04:54.428961 containerd[1469]: time="2024-12-13T09:04:54.427179842Z" level=info msg="RemoveContainer for \"6f4435c7eea1d715f09c7bc05c49de59f198c88ceacc9f6510bbeca957e08e5d\" returns successfully" Dec 13 09:04:54.429720 kubelet[2739]: I1213 09:04:54.429573 2739 scope.go:117] "RemoveContainer" containerID="59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f" Dec 13 09:04:54.432342 containerd[1469]: time="2024-12-13T09:04:54.431407117Z" level=info msg="RemoveContainer for \"59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f\"" Dec 13 09:04:54.437747 containerd[1469]: time="2024-12-13T09:04:54.437698769Z" level=info msg="CreateContainer within sandbox \"17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92\"" Dec 13 09:04:54.439582 containerd[1469]: time="2024-12-13T09:04:54.439536464Z" level=info msg="TearDown network for sandbox \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\" successfully" Dec 13 09:04:54.439582 containerd[1469]: time="2024-12-13T09:04:54.439573745Z" level=info msg="StopPodSandbox for \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\" returns successfully" Dec 13 09:04:54.441014 containerd[1469]: time="2024-12-13T09:04:54.440160189Z" level=info msg="StartContainer for \"a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92\"" Dec 13 09:04:54.445236 containerd[1469]: time="2024-12-13T09:04:54.445174991Z" level=info msg="RemoveContainer for \"59125eb6e45c8de6488721cde2ea69c03f3763da9777ca2cbbdf018abc9b524f\" returns successfully" Dec 13 09:04:54.483744 kubelet[2739]: I1213 09:04:54.483695 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fce06a-0764-4ff9-a3f3-ce807df56785-tigera-ca-bundle\") pod \"07fce06a-0764-4ff9-a3f3-ce807df56785\" (UID: \"07fce06a-0764-4ff9-a3f3-ce807df56785\") " Dec 13 09:04:54.483744 kubelet[2739]: I1213 09:04:54.483734 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/07fce06a-0764-4ff9-a3f3-ce807df56785-typha-certs\") pod \"07fce06a-0764-4ff9-a3f3-ce807df56785\" (UID: \"07fce06a-0764-4ff9-a3f3-ce807df56785\") " Dec 13 09:04:54.483908 
kubelet[2739]: I1213 09:04:54.483787 2739 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqwgh\" (UniqueName: \"kubernetes.io/projected/07fce06a-0764-4ff9-a3f3-ce807df56785-kube-api-access-zqwgh\") pod \"07fce06a-0764-4ff9-a3f3-ce807df56785\" (UID: \"07fce06a-0764-4ff9-a3f3-ce807df56785\") " Dec 13 09:04:54.501382 kubelet[2739]: I1213 09:04:54.501232 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fce06a-0764-4ff9-a3f3-ce807df56785-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "07fce06a-0764-4ff9-a3f3-ce807df56785" (UID: "07fce06a-0764-4ff9-a3f3-ce807df56785"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:04:54.501382 kubelet[2739]: I1213 09:04:54.501339 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fce06a-0764-4ff9-a3f3-ce807df56785-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "07fce06a-0764-4ff9-a3f3-ce807df56785" (UID: "07fce06a-0764-4ff9-a3f3-ce807df56785"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 09:04:54.502561 kubelet[2739]: I1213 09:04:54.502527 2739 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fce06a-0764-4ff9-a3f3-ce807df56785-kube-api-access-zqwgh" (OuterVolumeSpecName: "kube-api-access-zqwgh") pod "07fce06a-0764-4ff9-a3f3-ce807df56785" (UID: "07fce06a-0764-4ff9-a3f3-ce807df56785"). InnerVolumeSpecName "kube-api-access-zqwgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:04:54.508743 systemd[1]: Started cri-containerd-a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92.scope - libcontainer container a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92. Dec 13 09:04:54.584534 containerd[1469]: time="2024-12-13T09:04:54.584392779Z" level=info msg="StartContainer for \"a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92\" returns successfully" Dec 13 09:04:54.585618 kubelet[2739]: I1213 09:04:54.585469 2739 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zqwgh\" (UniqueName: \"kubernetes.io/projected/07fce06a-0764-4ff9-a3f3-ce807df56785-kube-api-access-zqwgh\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:54.585618 kubelet[2739]: I1213 09:04:54.585549 2739 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fce06a-0764-4ff9-a3f3-ce807df56785-tigera-ca-bundle\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:54.585618 kubelet[2739]: I1213 09:04:54.585564 2739 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/07fce06a-0764-4ff9-a3f3-ce807df56785-typha-certs\") on node \"ci-4081-2-1-6-29baf1648e\" DevicePath \"\"" Dec 13 09:04:54.933549 systemd[1]: var-lib-kubelet-pods-07fce06a\x2d0764\x2d4ff9\x2da3f3\x2dce807df56785-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Dec 13 09:04:54.934057 systemd[1]: var-lib-kubelet-pods-07fce06a\x2d0764\x2d4ff9\x2da3f3\x2dce807df56785-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzqwgh.mount: Deactivated successfully. Dec 13 09:04:54.934114 systemd[1]: var-lib-kubelet-pods-07fce06a\x2d0764\x2d4ff9\x2da3f3\x2dce807df56785-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Dec 13 09:04:55.401534 kubelet[2739]: I1213 09:04:55.401502 2739 scope.go:117] "RemoveContainer" containerID="0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e" Dec 13 09:04:55.407424 containerd[1469]: time="2024-12-13T09:04:55.406370003Z" level=info msg="RemoveContainer for \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\"" Dec 13 09:04:55.412796 systemd[1]: Removed slice kubepods-besteffort-pod07fce06a_0764_4ff9_a3f3_ce807df56785.slice - libcontainer container kubepods-besteffort-pod07fce06a_0764_4ff9_a3f3_ce807df56785.slice. Dec 13 09:04:55.414979 containerd[1469]: time="2024-12-13T09:04:55.414872592Z" level=info msg="RemoveContainer for \"0ef993726fdd75b2005d95ebef97ff60cdb83f63eccfe8fdf15c9fd4d9d1136e\" returns successfully" Dec 13 09:04:55.577907 systemd[1]: cri-containerd-a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92.scope: Deactivated successfully. Dec 13 09:04:55.615983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92-rootfs.mount: Deactivated successfully. Dec 13 09:04:55.627839 containerd[1469]: time="2024-12-13T09:04:55.627730289Z" level=info msg="shim disconnected" id=a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92 namespace=k8s.io Dec 13 09:04:55.628060 containerd[1469]: time="2024-12-13T09:04:55.627833090Z" level=warning msg="cleaning up after shim disconnected" id=a667e3a197ebc381c62e829d0a2e301384f25635bdd22864483c6ac044b14b92 namespace=k8s.io Dec 13 09:04:55.628060 containerd[1469]: time="2024-12-13T09:04:55.627873771Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:04:55.646206 containerd[1469]: time="2024-12-13T09:04:55.646145160Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:04:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:04:55.953715 kubelet[2739]: I1213 09:04:55.953678 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fce06a-0764-4ff9-a3f3-ce807df56785" path="/var/lib/kubelet/pods/07fce06a-0764-4ff9-a3f3-ce807df56785/volumes" Dec 13 09:04:55.954094 kubelet[2739]: I1213 09:04:55.954075 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d6ce33-fde6-47c0-a23e-dcb137fc2649" path="/var/lib/kubelet/pods/67d6ce33-fde6-47c0-a23e-dcb137fc2649/volumes" Dec 13 09:04:55.954626 kubelet[2739]: I1213 09:04:55.954601 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dfd1c7-089d-4dd5-bd73-26a0d273295c" path="/var/lib/kubelet/pods/98dfd1c7-089d-4dd5-bd73-26a0d273295c/volumes" Dec 13 09:04:56.302981 kubelet[2739]: I1213 09:04:56.302914 2739 topology_manager.go:215] "Topology Admit Handler" podUID="ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00" podNamespace="calico-system" podName="calico-typha-567dc64879-8bth8" Dec 13 09:04:56.303244 kubelet[2739]: E1213 09:04:56.303005 2739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98dfd1c7-089d-4dd5-bd73-26a0d273295c" containerName="calico-kube-controllers" Dec 13 09:04:56.303244 kubelet[2739]: E1213 09:04:56.303021 2739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07fce06a-0764-4ff9-a3f3-ce807df56785" containerName="calico-typha" Dec 13 09:04:56.303244 kubelet[2739]: I1213 09:04:56.303060 2739 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dfd1c7-089d-4dd5-bd73-26a0d273295c" 
containerName="calico-kube-controllers" Dec 13 09:04:56.303244 kubelet[2739]: I1213 09:04:56.303072 2739 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fce06a-0764-4ff9-a3f3-ce807df56785" containerName="calico-typha" Dec 13 09:04:56.315568 systemd[1]: Created slice kubepods-besteffort-podee3c0f5e_e6e6_453e_a8a7_65e2a4f7dc00.slice - libcontainer container kubepods-besteffort-podee3c0f5e_e6e6_453e_a8a7_65e2a4f7dc00.slice. Dec 13 09:04:56.399691 kubelet[2739]: I1213 09:04:56.399552 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00-tigera-ca-bundle\") pod \"calico-typha-567dc64879-8bth8\" (UID: \"ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00\") " pod="calico-system/calico-typha-567dc64879-8bth8" Dec 13 09:04:56.399691 kubelet[2739]: I1213 09:04:56.399601 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhrd\" (UniqueName: \"kubernetes.io/projected/ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00-kube-api-access-8qhrd\") pod \"calico-typha-567dc64879-8bth8\" (UID: \"ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00\") " pod="calico-system/calico-typha-567dc64879-8bth8" Dec 13 09:04:56.399691 kubelet[2739]: I1213 09:04:56.399628 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00-typha-certs\") pod \"calico-typha-567dc64879-8bth8\" (UID: \"ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00\") " pod="calico-system/calico-typha-567dc64879-8bth8" Dec 13 09:04:56.435407 containerd[1469]: time="2024-12-13T09:04:56.434754400Z" level=info msg="CreateContainer within sandbox \"17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 09:04:56.467656 containerd[1469]: time="2024-12-13T09:04:56.467353384Z" level=info msg="CreateContainer within sandbox \"17efbe0346e1ea93b16c1ffddcb8f53db27b460a280f658c049aa5ea3a30495e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c0d05c1c49592161450a57b94c29b818fea4617072119ed2b38cc4f24cd3ddc\"" Dec 13 09:04:56.469020 containerd[1469]: time="2024-12-13T09:04:56.468612674Z" level=info msg="StartContainer for \"7c0d05c1c49592161450a57b94c29b818fea4617072119ed2b38cc4f24cd3ddc\"" Dec 13 09:04:56.509444 systemd[1]: Started cri-containerd-7c0d05c1c49592161450a57b94c29b818fea4617072119ed2b38cc4f24cd3ddc.scope - libcontainer container 7c0d05c1c49592161450a57b94c29b818fea4617072119ed2b38cc4f24cd3ddc. Dec 13 09:04:56.570875 containerd[1469]: time="2024-12-13T09:04:56.570738699Z" level=info msg="StartContainer for \"7c0d05c1c49592161450a57b94c29b818fea4617072119ed2b38cc4f24cd3ddc\" returns successfully" Dec 13 09:04:56.620410 containerd[1469]: time="2024-12-13T09:04:56.620309019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-567dc64879-8bth8,Uid:ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00,Namespace:calico-system,Attempt:0,}" Dec 13 09:04:56.648303 containerd[1469]: time="2024-12-13T09:04:56.647092516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:56.648501 containerd[1469]: time="2024-12-13T09:04:56.648248045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:56.649482 containerd[1469]: time="2024-12-13T09:04:56.649439735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:56.649831 containerd[1469]: time="2024-12-13T09:04:56.649766457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:56.669681 systemd[1]: Started cri-containerd-aac2e631e7ee7156fc231ea5b013add488c22e449999c96ca3d43d3e18aa4f61.scope - libcontainer container aac2e631e7ee7156fc231ea5b013add488c22e449999c96ca3d43d3e18aa4f61. Dec 13 09:04:56.717427 containerd[1469]: time="2024-12-13T09:04:56.717340323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-567dc64879-8bth8,Uid:ee3c0f5e-e6e6-453e-a8a7-65e2a4f7dc00,Namespace:calico-system,Attempt:0,} returns sandbox id \"aac2e631e7ee7156fc231ea5b013add488c22e449999c96ca3d43d3e18aa4f61\"" Dec 13 09:04:56.728621 containerd[1469]: time="2024-12-13T09:04:56.728570254Z" level=info msg="CreateContainer within sandbox \"aac2e631e7ee7156fc231ea5b013add488c22e449999c96ca3d43d3e18aa4f61\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 09:04:56.742401 containerd[1469]: time="2024-12-13T09:04:56.742168084Z" level=info msg="CreateContainer within sandbox \"aac2e631e7ee7156fc231ea5b013add488c22e449999c96ca3d43d3e18aa4f61\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9c0cea98ca0fb108489f210ff81b16379ed74fa83862e696edc3b50abbbb4ddb\"" Dec 13 09:04:56.743425 containerd[1469]: time="2024-12-13T09:04:56.743230172Z" level=info msg="StartContainer for \"9c0cea98ca0fb108489f210ff81b16379ed74fa83862e696edc3b50abbbb4ddb\"" Dec 13 09:04:56.764137 kubelet[2739]: I1213 09:04:56.762643 2739 topology_manager.go:215] "Topology Admit Handler" podUID="b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a" podNamespace="calico-system" podName="calico-kube-controllers-66bfc974c9-2wwmz" Dec 13 09:04:56.771960 systemd[1]: Created slice kubepods-besteffort-podb9cb5b3e_abcf_4cbc_ad06_31b3d4c5e62a.slice - libcontainer container kubepods-besteffort-podb9cb5b3e_abcf_4cbc_ad06_31b3d4c5e62a.slice. Dec 13 09:04:56.790952 systemd[1]: Started cri-containerd-9c0cea98ca0fb108489f210ff81b16379ed74fa83862e696edc3b50abbbb4ddb.scope - libcontainer container 9c0cea98ca0fb108489f210ff81b16379ed74fa83862e696edc3b50abbbb4ddb. 
Dec 13 09:04:56.803531 kubelet[2739]: I1213 09:04:56.803486 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cxp\" (UniqueName: \"kubernetes.io/projected/b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a-kube-api-access-m8cxp\") pod \"calico-kube-controllers-66bfc974c9-2wwmz\" (UID: \"b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a\") " pod="calico-system/calico-kube-controllers-66bfc974c9-2wwmz" Dec 13 09:04:56.803782 kubelet[2739]: I1213 09:04:56.803742 2739 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a-tigera-ca-bundle\") pod \"calico-kube-controllers-66bfc974c9-2wwmz\" (UID: \"b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a\") " pod="calico-system/calico-kube-controllers-66bfc974c9-2wwmz" Dec 13 09:04:56.840854 containerd[1469]: time="2024-12-13T09:04:56.840324197Z" level=info msg="StartContainer for \"9c0cea98ca0fb108489f210ff81b16379ed74fa83862e696edc3b50abbbb4ddb\" returns successfully" Dec 13 09:04:57.080487 containerd[1469]: time="2024-12-13T09:04:57.079899686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66bfc974c9-2wwmz,Uid:b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a,Namespace:calico-system,Attempt:0,}" Dec 13 09:04:57.261037 systemd-networkd[1379]: calic810f66fc55: Link UP Dec 13 09:04:57.262968 systemd-networkd[1379]: calic810f66fc55: Gained carrier Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.149 [INFO][6192] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0 calico-kube-controllers-66bfc974c9- calico-system b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a 1073 0 2024-12-13 09:04:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66bfc974c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-6-29baf1648e calico-kube-controllers-66bfc974c9-2wwmz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic810f66fc55 [] []}} ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.149 [INFO][6192] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.185 [INFO][6202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" HandleID="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.201 [INFO][6202] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" HandleID="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-6-29baf1648e", "pod":"calico-kube-controllers-66bfc974c9-2wwmz", "timestamp":"2024-12-13 09:04:57.185475491 +0000 UTC"}, Hostname:"ci-4081-2-1-6-29baf1648e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.201 [INFO][6202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.201 [INFO][6202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.201 [INFO][6202] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-29baf1648e' Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.204 [INFO][6202] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.217 [INFO][6202] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.226 [INFO][6202] ipam/ipam.go 489: Trying affinity for 192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.229 [INFO][6202] ipam/ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.235 [INFO][6202] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.235 [INFO][6202] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.240 [INFO][6202] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7 Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.246 [INFO][6202] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.255 [INFO][6202] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.36.71/26] block=192.168.36.64/26 handle="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.255 [INFO][6202] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.71/26] handle="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" host="ci-4081-2-1-6-29baf1648e" Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 
09:04:57.255 [INFO][6202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:04:57.289772 containerd[1469]: 2024-12-13 09:04:57.255 [INFO][6202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.71/26] IPv6=[] ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" HandleID="k8s-pod-network.a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" Dec 13 09:04:57.290371 containerd[1469]: 2024-12-13 09:04:57.258 [INFO][6192] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0", GenerateName:"calico-kube-controllers-66bfc974c9-", Namespace:"calico-system", SelfLink:"", UID:"b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66bfc974c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"", Pod:"calico-kube-controllers-66bfc974c9-2wwmz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic810f66fc55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:57.290371 containerd[1469]: 2024-12-13 09:04:57.258 [INFO][6192] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.36.71/32] ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" Dec 13 09:04:57.290371 containerd[1469]: 2024-12-13 09:04:57.258 [INFO][6192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic810f66fc55 ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" Dec 13 09:04:57.290371 containerd[1469]: 2024-12-13 09:04:57.262 [INFO][6192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" 
WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" Dec 13 09:04:57.290371 containerd[1469]: 2024-12-13 09:04:57.263 [INFO][6192] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0", GenerateName:"calico-kube-controllers-66bfc974c9-", Namespace:"calico-system", SelfLink:"", UID:"b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 4, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66bfc974c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-29baf1648e", ContainerID:"a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7", Pod:"calico-kube-controllers-66bfc974c9-2wwmz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic810f66fc55", MAC:"a2:00:fb:80:da:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:04:57.290371 containerd[1469]: 2024-12-13 09:04:57.280 [INFO][6192] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7" Namespace="calico-system" Pod="calico-kube-controllers-66bfc974c9-2wwmz" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--66bfc974c9--2wwmz-eth0" Dec 13 09:04:57.320391 containerd[1469]: time="2024-12-13T09:04:57.319981407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:04:57.320391 containerd[1469]: time="2024-12-13T09:04:57.320040447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:04:57.320391 containerd[1469]: time="2024-12-13T09:04:57.320059967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:57.320391 containerd[1469]: time="2024-12-13T09:04:57.320166168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:04:57.347922 systemd[1]: Started cri-containerd-a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7.scope - libcontainer container a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7. Dec 13 09:04:57.395642 containerd[1469]: time="2024-12-13T09:04:57.395521371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66bfc974c9-2wwmz,Uid:b9cb5b3e-abcf-4cbc-ad06-31b3d4c5e62a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7\"" Dec 13 09:04:57.414600 containerd[1469]: time="2024-12-13T09:04:57.413699917Z" level=info msg="CreateContainer within sandbox \"a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 09:04:57.437962 containerd[1469]: time="2024-12-13T09:04:57.437882430Z" level=info msg="CreateContainer within sandbox \"a2d3c060c737f9a5a86b41eda85b6c5ef9a8263ec5b71a7cdd306e6cb813f4d7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e\"" Dec 13 09:04:57.439203 containerd[1469]: time="2024-12-13T09:04:57.439046239Z" level=info msg="StartContainer for \"cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e\"" Dec 13 09:04:57.519780 systemd[1]: run-containerd-runc-k8s.io-7c0d05c1c49592161450a57b94c29b818fea4617072119ed2b38cc4f24cd3ddc-runc.fCVrzG.mount: Deactivated successfully. Dec 13 09:04:57.525024 kubelet[2739]: I1213 09:04:57.524971 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gb9nw" podStartSLOduration=4.5249537669999995 podStartE2EDuration="4.524953767s" podCreationTimestamp="2024-12-13 09:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:04:57.523309753 +0000 UTC m=+71.680992282" watchObservedRunningTime="2024-12-13 09:04:57.524953767 +0000 UTC m=+71.682636256" Dec 13 09:04:57.526606 kubelet[2739]: I1213 09:04:57.526553 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-567dc64879-8bth8" podStartSLOduration=5.526529179 podStartE2EDuration="5.526529179s" podCreationTimestamp="2024-12-13 09:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:04:57.492654988 +0000 UTC m=+71.650337597" watchObservedRunningTime="2024-12-13 09:04:57.526529179 +0000 UTC m=+71.684211748" Dec 13 09:04:57.527406 systemd[1]: Started cri-containerd-cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e.scope - libcontainer container cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e. 
Dec 13 09:04:57.630311 containerd[1469]: time="2024-12-13T09:04:57.630141448Z" level=info msg="StartContainer for \"cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e\" returns successfully" Dec 13 09:04:59.307559 systemd-networkd[1379]: calic810f66fc55: Gained IPv6LL Dec 13 09:05:04.969314 kubelet[2739]: I1213 09:05:04.969132 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:05:05.000721 kubelet[2739]: I1213 09:05:05.000572 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66bfc974c9-2wwmz" podStartSLOduration=11.0005545 podStartE2EDuration="11.0005545s" podCreationTimestamp="2024-12-13 09:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:04:58.518664516 +0000 UTC m=+72.676347045" watchObservedRunningTime="2024-12-13 09:05:05.0005545 +0000 UTC m=+79.158237029" Dec 13 09:05:27.107597 systemd[1]: run-containerd-runc-k8s.io-cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e-runc.y5nAlm.mount: Deactivated successfully. Dec 13 09:05:47.162512 kubelet[2739]: I1213 09:05:47.162475 2739 scope.go:117] "RemoveContainer" containerID="e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b" Dec 13 09:05:47.165229 containerd[1469]: time="2024-12-13T09:05:47.165170749Z" level=info msg="RemoveContainer for \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\"" Dec 13 09:05:47.171055 containerd[1469]: time="2024-12-13T09:05:47.171005544Z" level=info msg="RemoveContainer for \"e0ae24c76998a49aba10c46f776b6ba3b7c35072e61535a61b18741772b9489b\" returns successfully" Dec 13 09:05:47.174109 containerd[1469]: time="2024-12-13T09:05:47.174059403Z" level=info msg="StopPodSandbox for \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\"" Dec 13 09:05:47.175322 containerd[1469]: time="2024-12-13T09:05:47.174181043Z" level=info msg="TearDown network for sandbox \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\" successfully" Dec 13 09:05:47.175322 containerd[1469]: time="2024-12-13T09:05:47.174206084Z" level=info msg="StopPodSandbox for \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\" returns successfully" Dec 13 09:05:47.175322 containerd[1469]: time="2024-12-13T09:05:47.174538246Z" level=info msg="RemovePodSandbox for \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\"" Dec 13 09:05:47.175322 containerd[1469]: time="2024-12-13T09:05:47.174568566Z" level=info msg="Forcibly stopping sandbox \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\"" Dec 13 09:05:47.175322 containerd[1469]: time="2024-12-13T09:05:47.174632286Z" level=info msg="TearDown network for sandbox \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\" successfully" Dec 13 09:05:47.179212 containerd[1469]: time="2024-12-13T09:05:47.178900992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5da39d0dd440e56e8cbe2bbdee5796ddff16dff0e27dbbba85666819e7f17c9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:05:47.180331 containerd[1469]: time="2024-12-13T09:05:47.179984519Z" level=info msg="StopPodSandbox for \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\""
Dec 13 09:05:47.180331 containerd[1469]: time="2024-12-13T09:05:47.180084999Z" level=info msg="TearDown network for sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" successfully"
Dec 13 09:05:47.180331 containerd[1469]: time="2024-12-13T09:05:47.180097399Z" level=info msg="StopPodSandbox for \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" returns successfully"
Dec 13 09:05:47.181986 containerd[1469]: time="2024-12-13T09:05:47.180899084Z" level=info msg="RemovePodSandbox for \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\""
Dec 13 09:05:47.181986 containerd[1469]: time="2024-12-13T09:05:47.180936564Z" level=info msg="Forcibly stopping sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\""
Dec 13 09:05:47.181986 containerd[1469]: time="2024-12-13T09:05:47.180996165Z" level=info msg="TearDown network for sandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" successfully"
Dec 13 09:05:47.186995 containerd[1469]: time="2024-12-13T09:05:47.186937840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:05:47.187323 containerd[1469]: time="2024-12-13T09:05:47.187298403Z" level=info msg="RemovePodSandbox \"d985588af55440e5d8a20514ed24d40b18c5505e9260c06bd4dac9b94837ebc7\" returns successfully"
Dec 13 09:05:47.189268 containerd[1469]: time="2024-12-13T09:05:47.188139728Z" level=info msg="StopPodSandbox for \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\""
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.257 [WARNING][6668] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.257 [INFO][6668] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.257 [INFO][6668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" iface="eth0" netns=""
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.257 [INFO][6668] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.257 [INFO][6668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.279 [INFO][6675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.279 [INFO][6675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.279 [INFO][6675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.292 [WARNING][6675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.292 [INFO][6675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.294 [INFO][6675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:05:47.298851 containerd[1469]: 2024-12-13 09:05:47.296 [INFO][6668] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.300026 containerd[1469]: time="2024-12-13T09:05:47.299585760Z" level=info msg="TearDown network for sandbox \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" successfully"
Dec 13 09:05:47.300026 containerd[1469]: time="2024-12-13T09:05:47.299629801Z" level=info msg="StopPodSandbox for \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" returns successfully"
Dec 13 09:05:47.301076 containerd[1469]: time="2024-12-13T09:05:47.300665367Z" level=info msg="RemovePodSandbox for \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\""
Dec 13 09:05:47.301076 containerd[1469]: time="2024-12-13T09:05:47.300710687Z" level=info msg="Forcibly stopping sandbox \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\""
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.349 [WARNING][6693] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" WorkloadEndpoint="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.349 [INFO][6693] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.349 [INFO][6693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" iface="eth0" netns=""
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.349 [INFO][6693] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.349 [INFO][6693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.373 [INFO][6699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.373 [INFO][6699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.373 [INFO][6699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.384 [WARNING][6699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.384 [INFO][6699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" HandleID="k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065" Workload="ci--4081--2--1--6--29baf1648e-k8s-calico--kube--controllers--85c4855bd8--km44p-eth0"
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.386 [INFO][6699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:05:47.390089 containerd[1469]: 2024-12-13 09:05:47.388 [INFO][6693] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
Dec 13 09:05:47.391815 containerd[1469]: time="2024-12-13T09:05:47.390774551Z" level=info msg="TearDown network for sandbox \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" successfully"
Dec 13 09:05:47.398913 containerd[1469]: time="2024-12-13T09:05:47.398851480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 09:05:47.399181 containerd[1469]: time="2024-12-13T09:05:47.399152642Z" level=info msg="RemovePodSandbox \"eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065\" returns successfully"
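Both CNI teardown passes above end the same way: Calico's IPAM plugin takes the host-wide IPAM lock, finds no allocation for the handle, logs the "doesn't exist" warning, falls back to a release by workload ID, and unlocks. That is the expected shape for a sandbox whose WorkloadEndpoint has already been deleted from the datastore. A rough sketch of the release-by-handle step via libcalico-go follows; the import path and the NewFromEnv/ReleaseByHandle names follow the upstream client library (and may differ across Calico releases), and datastore configuration is assumed to come from environment variables such as DATASTORE_TYPE and KUBECONFIG:

// Sketch: release all Calico IPAM allocations recorded under a CNI handle,
// the operation ipam_plugin.go performs in the log above.
package main

import (
	"context"
	"log"

	"github.com/projectcalico/libcalico-go/lib/clientv3"
)

func main() {
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	// Handle ID from the log above: "k8s-pod-network." + container ID.
	handleID := "k8s-pod-network.eb737452f5c437d030a3c3ec4bc39157fdc1e1569c22ae31e27590c09e4a1065"
	// The CNI plugin treats a missing allocation as a warning and carries on;
	// here we simply surface whatever the client returns.
	if err := c.IPAM().ReleaseByHandle(context.Background(), handleID); err != nil {
		log.Fatal(err)
	}
}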
Dec 13 09:06:57.117591 systemd[1]: run-containerd-runc-k8s.io-cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e-runc.487sYI.mount: Deactivated successfully.
Dec 13 09:07:09.919109 update_engine[1454]: I20241213 09:07:09.918998 1454 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 09:07:09.919109 update_engine[1454]: I20241213 09:07:09.919076 1454 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 09:07:09.919913 update_engine[1454]: I20241213 09:07:09.919438 1454 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 09:07:09.921085 update_engine[1454]: I20241213 09:07:09.920460 1454 omaha_request_params.cc:62] Current group set to stable
Dec 13 09:07:09.921085 update_engine[1454]: I20241213 09:07:09.920608 1454 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 09:07:09.921085 update_engine[1454]: I20241213 09:07:09.920624 1454 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 09:07:09.921085 update_engine[1454]: I20241213 09:07:09.920646 1454 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 09:07:09.922134 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 09:07:09.922747 update_engine[1454]: I20241213 09:07:09.922700 1454 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 09:07:09.922849 update_engine[1454]: I20241213 09:07:09.922833 1454 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 09:07:09.922902 update_engine[1454]: I20241213 09:07:09.922848 1454 omaha_request_action.cc:272] Request:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]:
Dec 13 09:07:09.922902 update_engine[1454]: I20241213 09:07:09.922859 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:07:09.929018 update_engine[1454]: I20241213 09:07:09.928524 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:07:09.929018 update_engine[1454]: I20241213 09:07:09.928959 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:07:09.930673 update_engine[1454]: E20241213 09:07:09.930567 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:07:09.930673 update_engine[1454]: I20241213 09:07:09.930644 1454 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 09:07:19.829280 update_engine[1454]: I20241213 09:07:19.828445 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:07:19.829280 update_engine[1454]: I20241213 09:07:19.828743 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:07:19.829280 update_engine[1454]: I20241213 09:07:19.829123 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:07:19.830098 update_engine[1454]: E20241213 09:07:19.829919 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:07:19.830098 update_engine[1454]: I20241213 09:07:19.829988 1454 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 09:07:27.116019 systemd[1]: run-containerd-runc-k8s.io-cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e-runc.pLw6OW.mount: Deactivated successfully.
Dec 13 09:07:29.826808 update_engine[1454]: I20241213 09:07:29.826697 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:07:29.827479 update_engine[1454]: I20241213 09:07:29.827046 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:07:29.827479 update_engine[1454]: I20241213 09:07:29.827422 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:07:29.828337 update_engine[1454]: E20241213 09:07:29.828277 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:07:29.828435 update_engine[1454]: I20241213 09:07:29.828361 1454 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 09:07:39.830479 update_engine[1454]: I20241213 09:07:39.829921 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:07:39.830479 update_engine[1454]: I20241213 09:07:39.830277 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:07:39.831175 update_engine[1454]: I20241213 09:07:39.830574 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:07:39.831783 update_engine[1454]: E20241213 09:07:39.831681 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:07:39.831911 update_engine[1454]: I20241213 09:07:39.831820 1454 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 09:07:39.831911 update_engine[1454]: I20241213 09:07:39.831840 1454 omaha_request_action.cc:617] Omaha request response:
Dec 13 09:07:39.832044 update_engine[1454]: E20241213 09:07:39.831990 1454 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 09:07:39.832044 update_engine[1454]: I20241213 09:07:39.832021 1454 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 09:07:39.832044 update_engine[1454]: I20241213 09:07:39.832032 1454 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 09:07:39.832044 update_engine[1454]: I20241213 09:07:39.832042 1454 update_attempter.cc:306] Processing Done.
Dec 13 09:07:39.832324 update_engine[1454]: E20241213 09:07:39.832064 1454 update_attempter.cc:619] Update failed.
Dec 13 09:07:39.832324 update_engine[1454]: I20241213 09:07:39.832074 1454 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 09:07:39.832324 update_engine[1454]: I20241213 09:07:39.832084 1454 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 09:07:39.832324 update_engine[1454]: I20241213 09:07:39.832094 1454 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 09:07:39.832324 update_engine[1454]: I20241213 09:07:39.832207 1454 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 09:07:39.832324 update_engine[1454]: I20241213 09:07:39.832243 1454 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 09:07:39.832324 update_engine[1454]: I20241213 09:07:39.832252 1454 omaha_request_action.cc:272] Request:
Dec 13 09:07:39.832324 update_engine[1454]:
Dec 13 09:07:39.832324 update_engine[1454]:
Dec 13 09:07:39.832324 update_engine[1454]:
Dec 13 09:07:39.832324 update_engine[1454]:
Dec 13 09:07:39.832324 update_engine[1454]:
Dec 13 09:07:39.832324 update_engine[1454]:
Dec 13 09:07:39.832324 update_engine[1454]: I20241213 09:07:39.832259 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 09:07:39.833090 update_engine[1454]: I20241213 09:07:39.832440 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 09:07:39.833090 update_engine[1454]: I20241213 09:07:39.832630 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 09:07:39.833516 update_engine[1454]: E20241213 09:07:39.833470 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 09:07:39.833600 update_engine[1454]: I20241213 09:07:39.833527 1454 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 09:07:39.833600 update_engine[1454]: I20241213 09:07:39.833539 1454 omaha_request_action.cc:617] Omaha request response:
Dec 13 09:07:39.833600 update_engine[1454]: I20241213 09:07:39.833545 1454 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 09:07:39.833600 update_engine[1454]: I20241213 09:07:39.833551 1454 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 09:07:39.833600 update_engine[1454]: I20241213 09:07:39.833557 1454 update_attempter.cc:306] Processing Done.
Dec 13 09:07:39.833600 update_engine[1454]: I20241213 09:07:39.833563 1454 update_attempter.cc:310] Error event sent.
Dec 13 09:07:39.833600 update_engine[1454]: I20241213 09:07:39.833572 1454 update_check_scheduler.cc:74] Next update check in 45m22s
Dec 13 09:07:39.834656 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 09:07:39.834656 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
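The update_engine sequence above is one complete failed check cycle: the Omaha URL is the literal hostname "disabled" (apparently the configured server here, a common way to switch off update polling on Flatcar hosts), so each fetch fails with "Could not resolve host", the fetcher retries three times at roughly ten-second intervals, the attempt is declared failed, the error event is posted to the same unreachable endpoint, and the scheduler backs off to the next periodic check in 45m22s. A generic Go sketch of that bounded-retry-then-reschedule pattern follows; the endpoint, retry budget, and intervals mirror the log but are illustrative, not update_engine's actual implementation:

// Sketch: retry an update check a bounded number of times, then give up and
// wait for the next scheduled check, as update_engine does above.
package main

import (
	"log"
	"net/http"
	"time"
)

// checkOnce performs a single update-check request.
func checkOnce(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err // e.g. DNS failure, as with the host "disabled" above
	}
	resp.Body.Close()
	return nil
}

func main() {
	const endpoint = "http://disabled/" // placeholder, per the log
	const maxRetries = 3
	for attempt := 1; ; attempt++ {
		if err := checkOnce(endpoint); err == nil {
			log.Println("update check succeeded")
			return
		} else if attempt > maxRetries {
			log.Printf("update failed: %v; next update check in 45m22s", err)
			time.Sleep(45*time.Minute + 22*time.Second) // randomized in the real engine
			attempt = 0 // start a fresh attempt cycle
		} else {
			log.Printf("no HTTP response, retry %d", attempt)
			time.Sleep(10 * time.Second)
		}
	}
}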
Dec 13 09:08:39.363040 systemd[1]: Started sshd@7-188.245.203.154:22-139.178.89.65:56522.service - OpenSSH per-connection server daemon (139.178.89.65:56522).
Dec 13 09:08:40.344742 sshd[7084]: Accepted publickey for core from 139.178.89.65 port 56522 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:08:40.345958 sshd[7084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:08:40.365994 systemd-logind[1453]: New session 8 of user core.
Dec 13 09:08:40.371484 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 09:08:41.160753 sshd[7084]: pam_unix(sshd:session): session closed for user core
Dec 13 09:08:41.166437 systemd[1]: sshd@7-188.245.203.154:22-139.178.89.65:56522.service: Deactivated successfully.
Dec 13 09:08:41.170323 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 09:08:41.171247 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit.
Dec 13 09:08:41.174679 systemd-logind[1453]: Removed session 8.
Dec 13 09:08:46.335676 systemd[1]: Started sshd@8-188.245.203.154:22-139.178.89.65:56534.service - OpenSSH per-connection server daemon (139.178.89.65:56534).
Dec 13 09:08:47.314954 sshd[7101]: Accepted publickey for core from 139.178.89.65 port 56534 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:08:47.317329 sshd[7101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:08:47.324460 systemd-logind[1453]: New session 9 of user core.
Dec 13 09:08:47.332935 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 09:08:48.081909 sshd[7101]: pam_unix(sshd:session): session closed for user core
Dec 13 09:08:48.086371 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit.
Dec 13 09:08:48.086619 systemd[1]: sshd@8-188.245.203.154:22-139.178.89.65:56534.service: Deactivated successfully.
Dec 13 09:08:48.090501 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 09:08:48.093810 systemd-logind[1453]: Removed session 9.
Dec 13 09:08:53.261688 systemd[1]: Started sshd@9-188.245.203.154:22-139.178.89.65:46436.service - OpenSSH per-connection server daemon (139.178.89.65:46436).
Dec 13 09:08:54.244594 sshd[7115]: Accepted publickey for core from 139.178.89.65 port 46436 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:08:54.245374 sshd[7115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:08:54.252891 systemd-logind[1453]: New session 10 of user core.
Dec 13 09:08:54.258664 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 09:08:55.001818 sshd[7115]: pam_unix(sshd:session): session closed for user core
Dec 13 09:08:55.006599 systemd[1]: sshd@9-188.245.203.154:22-139.178.89.65:46436.service: Deactivated successfully.
Dec 13 09:08:55.009883 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 09:08:55.012816 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit.
Dec 13 09:08:55.015096 systemd-logind[1453]: Removed session 10.
Dec 13 09:08:55.181529 systemd[1]: Started sshd@10-188.245.203.154:22-139.178.89.65:46452.service - OpenSSH per-connection server daemon (139.178.89.65:46452).
Dec 13 09:08:56.179606 sshd[7151]: Accepted publickey for core from 139.178.89.65 port 46452 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:08:56.186460 sshd[7151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:08:56.196244 systemd-logind[1453]: New session 11 of user core.
Dec 13 09:08:56.206532 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 09:08:56.986449 sshd[7151]: pam_unix(sshd:session): session closed for user core
Dec 13 09:08:56.994514 systemd[1]: sshd@10-188.245.203.154:22-139.178.89.65:46452.service: Deactivated successfully.
Dec 13 09:08:56.997181 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 09:08:56.999896 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Dec 13 09:08:57.002678 systemd-logind[1453]: Removed session 11.
Dec 13 09:08:57.159756 systemd[1]: Started sshd@11-188.245.203.154:22-139.178.89.65:46462.service - OpenSSH per-connection server daemon (139.178.89.65:46462).
Dec 13 09:08:58.149174 sshd[7200]: Accepted publickey for core from 139.178.89.65 port 46462 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:08:58.150218 sshd[7200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:08:58.156044 systemd-logind[1453]: New session 12 of user core.
Dec 13 09:08:58.161442 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 09:08:58.907801 sshd[7200]: pam_unix(sshd:session): session closed for user core
Dec 13 09:08:58.913624 systemd[1]: sshd@11-188.245.203.154:22-139.178.89.65:46462.service: Deactivated successfully.
Dec 13 09:08:58.916850 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 09:08:58.918995 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Dec 13 09:08:58.920042 systemd-logind[1453]: Removed session 12.
Dec 13 09:09:04.082795 systemd[1]: Started sshd@12-188.245.203.154:22-139.178.89.65:50952.service - OpenSSH per-connection server daemon (139.178.89.65:50952).
Dec 13 09:09:05.068248 sshd[7219]: Accepted publickey for core from 139.178.89.65 port 50952 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:05.069939 sshd[7219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:05.075309 systemd-logind[1453]: New session 13 of user core.
Dec 13 09:09:05.081501 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 09:09:05.829425 sshd[7219]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:05.833342 systemd[1]: sshd@12-188.245.203.154:22-139.178.89.65:50952.service: Deactivated successfully.
Dec 13 09:09:05.836081 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 09:09:05.840234 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Dec 13 09:09:05.841889 systemd-logind[1453]: Removed session 13.
Dec 13 09:09:11.014916 systemd[1]: Started sshd@13-188.245.203.154:22-139.178.89.65:38612.service - OpenSSH per-connection server daemon (139.178.89.65:38612).
Dec 13 09:09:12.001460 sshd[7232]: Accepted publickey for core from 139.178.89.65 port 38612 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:12.005482 sshd[7232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:12.015571 systemd-logind[1453]: New session 14 of user core.
Dec 13 09:09:12.022513 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 09:09:12.760883 sshd[7232]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:12.765532 systemd[1]: sshd@13-188.245.203.154:22-139.178.89.65:38612.service: Deactivated successfully.
Dec 13 09:09:12.767958 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 09:09:12.771131 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Dec 13 09:09:12.772820 systemd-logind[1453]: Removed session 14.
Dec 13 09:09:17.938686 systemd[1]: Started sshd@14-188.245.203.154:22-139.178.89.65:38616.service - OpenSSH per-connection server daemon (139.178.89.65:38616).
Dec 13 09:09:18.917487 sshd[7244]: Accepted publickey for core from 139.178.89.65 port 38616 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:18.919488 sshd[7244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:18.927247 systemd-logind[1453]: New session 15 of user core.
Dec 13 09:09:18.932460 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 09:09:19.701244 sshd[7244]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:19.707571 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 09:09:19.710707 systemd[1]: sshd@14-188.245.203.154:22-139.178.89.65:38616.service: Deactivated successfully.
Dec 13 09:09:19.719108 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Dec 13 09:09:19.721423 systemd-logind[1453]: Removed session 15.
Dec 13 09:09:19.878698 systemd[1]: Started sshd@15-188.245.203.154:22-139.178.89.65:41418.service - OpenSSH per-connection server daemon (139.178.89.65:41418).
Dec 13 09:09:20.859937 sshd[7257]: Accepted publickey for core from 139.178.89.65 port 41418 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:20.860727 sshd[7257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:20.867558 systemd-logind[1453]: New session 16 of user core.
Dec 13 09:09:20.872698 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 09:09:21.766990 sshd[7257]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:21.771402 systemd[1]: sshd@15-188.245.203.154:22-139.178.89.65:41418.service: Deactivated successfully.
Dec 13 09:09:21.773925 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 09:09:21.777433 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Dec 13 09:09:21.779673 systemd-logind[1453]: Removed session 16.
Dec 13 09:09:21.940728 systemd[1]: Started sshd@16-188.245.203.154:22-139.178.89.65:41422.service - OpenSSH per-connection server daemon (139.178.89.65:41422).
Dec 13 09:09:22.920651 sshd[7268]: Accepted publickey for core from 139.178.89.65 port 41422 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:22.923784 sshd[7268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:22.934464 systemd-logind[1453]: New session 17 of user core.
Dec 13 09:09:22.941486 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 09:09:23.816218 systemd[1]: run-containerd-runc-k8s.io-7c0d05c1c49592161450a57b94c29b818fea4617072119ed2b38cc4f24cd3ddc-runc.OEOEN6.mount: Deactivated successfully.
Dec 13 09:09:25.906895 sshd[7268]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:25.914677 systemd[1]: sshd@16-188.245.203.154:22-139.178.89.65:41422.service: Deactivated successfully.
Dec 13 09:09:25.922685 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 09:09:25.924695 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Dec 13 09:09:25.927572 systemd-logind[1453]: Removed session 17.
Dec 13 09:09:26.096606 systemd[1]: Started sshd@17-188.245.203.154:22-139.178.89.65:41428.service - OpenSSH per-connection server daemon (139.178.89.65:41428).
Dec 13 09:09:27.085257 sshd[7310]: Accepted publickey for core from 139.178.89.65 port 41428 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:27.087458 sshd[7310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:27.094855 systemd-logind[1453]: New session 18 of user core.
Dec 13 09:09:27.097888 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 09:09:27.115854 systemd[1]: run-containerd-runc-k8s.io-cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e-runc.GFLtqo.mount: Deactivated successfully.
Dec 13 09:09:27.975564 sshd[7310]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:27.986677 systemd[1]: sshd@17-188.245.203.154:22-139.178.89.65:41428.service: Deactivated successfully.
Dec 13 09:09:27.993346 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 09:09:27.996180 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Dec 13 09:09:27.998024 systemd-logind[1453]: Removed session 18.
Dec 13 09:09:28.152709 systemd[1]: Started sshd@18-188.245.203.154:22-139.178.89.65:41434.service - OpenSSH per-connection server daemon (139.178.89.65:41434).
Dec 13 09:09:29.121505 sshd[7339]: Accepted publickey for core from 139.178.89.65 port 41434 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:29.124043 sshd[7339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:29.129820 systemd-logind[1453]: New session 19 of user core.
Dec 13 09:09:29.135456 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 09:09:29.872472 sshd[7339]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:29.879755 systemd[1]: sshd@18-188.245.203.154:22-139.178.89.65:41434.service: Deactivated successfully.
Dec 13 09:09:29.882137 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 09:09:29.883473 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Dec 13 09:09:29.885023 systemd-logind[1453]: Removed session 19.
Dec 13 09:09:35.072678 systemd[1]: Started sshd@19-188.245.203.154:22-139.178.89.65:48610.service - OpenSSH per-connection server daemon (139.178.89.65:48610).
Dec 13 09:09:36.071271 sshd[7357]: Accepted publickey for core from 139.178.89.65 port 48610 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:36.074550 sshd[7357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:36.080857 systemd-logind[1453]: New session 20 of user core.
Dec 13 09:09:36.087435 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 09:09:36.834081 sshd[7357]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:36.840766 systemd[1]: sshd@19-188.245.203.154:22-139.178.89.65:48610.service: Deactivated successfully.
Dec 13 09:09:36.843127 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 09:09:36.845550 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Dec 13 09:09:36.848292 systemd-logind[1453]: Removed session 20.
Dec 13 09:09:42.005505 systemd[1]: Started sshd@20-188.245.203.154:22-139.178.89.65:36310.service - OpenSSH per-connection server daemon (139.178.89.65:36310).
Dec 13 09:09:42.987765 sshd[7370]: Accepted publickey for core from 139.178.89.65 port 36310 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:09:42.991178 sshd[7370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:09:42.997760 systemd-logind[1453]: New session 21 of user core.
Dec 13 09:09:43.006437 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 09:09:43.750030 sshd[7370]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:43.757467 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Dec 13 09:09:43.758383 systemd[1]: sshd@20-188.245.203.154:22-139.178.89.65:36310.service: Deactivated successfully.
Dec 13 09:09:43.762774 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 09:09:43.765716 systemd-logind[1453]: Removed session 21.
Dec 13 09:09:57.118104 systemd[1]: run-containerd-runc-k8s.io-cf216f36907da90bc7e117b70f53f507328336fffbfc37a90db8f6faf28db49e-runc.pjw7BL.mount: Deactivated successfully.