Dec 13 08:56:24.899530 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 08:56:24.899559 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 08:56:24.899569 kernel: KASLR enabled
Dec 13 08:56:24.899575 kernel: efi: EFI v2.7 by EDK II
Dec 13 08:56:24.899581 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Dec 13 08:56:24.899587 kernel: random: crng init done
Dec 13 08:56:24.899604 kernel: ACPI: Early table checksum verification disabled
Dec 13 08:56:24.899610 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Dec 13 08:56:24.899617 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 13 08:56:24.899623 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899631 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899637 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899643 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899649 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899657 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899665 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899672 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899679 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:56:24.899686 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 08:56:24.899692 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 13 08:56:24.899699 kernel: NUMA: Failed to initialise from firmware
Dec 13 08:56:24.899705 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 08:56:24.899712 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Dec 13 08:56:24.899718 kernel: Zone ranges:
Dec 13 08:56:24.899724 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 08:56:24.899731 kernel: DMA32 empty
Dec 13 08:56:24.899739 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 13 08:56:24.899745 kernel: Movable zone start for each node
Dec 13 08:56:24.899751 kernel: Early memory node ranges
Dec 13 08:56:24.899758 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Dec 13 08:56:24.899765 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Dec 13 08:56:24.899771 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Dec 13 08:56:24.899777 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Dec 13 08:56:24.899784 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Dec 13 08:56:24.899790 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 08:56:24.899797 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 13 08:56:24.899803 kernel: psci: probing for conduit method from ACPI.
Dec 13 08:56:24.899811 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 08:56:24.899817 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 08:56:24.899837 kernel: psci: Trusted OS migration not required
Dec 13 08:56:24.899849 kernel: psci: SMC Calling Convention v1.1
Dec 13 08:56:24.899856 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 08:56:24.899863 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 08:56:24.899871 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 08:56:24.899879 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 08:56:24.899933 kernel: Detected PIPT I-cache on CPU0
Dec 13 08:56:24.899941 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 08:56:24.899948 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 08:56:24.899955 kernel: CPU features: detected: Spectre-v4
Dec 13 08:56:24.899962 kernel: CPU features: detected: Spectre-BHB
Dec 13 08:56:24.899969 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 08:56:24.899976 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 08:56:24.899983 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 08:56:24.899990 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 08:56:24.900000 kernel: alternatives: applying boot alternatives
Dec 13 08:56:24.900008 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 08:56:24.900016 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 08:56:24.900023 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 08:56:24.900030 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 08:56:24.900037 kernel: Fallback order for Node 0: 0
Dec 13 08:56:24.900044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Dec 13 08:56:24.900051 kernel: Policy zone: Normal
Dec 13 08:56:24.900058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 08:56:24.900064 kernel: software IO TLB: area num 2.
Dec 13 08:56:24.900071 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Dec 13 08:56:24.900080 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved)
Dec 13 08:56:24.900087 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 08:56:24.900094 kernel: trace event string verifier disabled
Dec 13 08:56:24.900101 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 08:56:24.900109 kernel: rcu: RCU event tracing is enabled.
Dec 13 08:56:24.900116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 08:56:24.900123 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 08:56:24.900130 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 08:56:24.900137 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 08:56:24.900144 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 08:56:24.900151 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 08:56:24.900160 kernel: GICv3: 256 SPIs implemented
Dec 13 08:56:24.900167 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 08:56:24.900173 kernel: Root IRQ handler: gic_handle_irq
Dec 13 08:56:24.900180 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 08:56:24.900187 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 08:56:24.900194 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 08:56:24.900201 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 08:56:24.900208 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 08:56:24.900215 kernel: GICv3: using LPI property table @0x00000001000e0000
Dec 13 08:56:24.900222 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Dec 13 08:56:24.900229 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 08:56:24.900238 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 08:56:24.900245 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 08:56:24.900252 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 08:56:24.900259 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 08:56:24.900266 kernel: Console: colour dummy device 80x25
Dec 13 08:56:24.900289 kernel: ACPI: Core revision 20230628
Dec 13 08:56:24.900301 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 08:56:24.900308 kernel: pid_max: default: 32768 minimum: 301
Dec 13 08:56:24.900315 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 08:56:24.900323 kernel: landlock: Up and running.
Dec 13 08:56:24.900333 kernel: SELinux: Initializing.
Dec 13 08:56:24.900340 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 08:56:24.900348 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 08:56:24.900355 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:56:24.900362 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:56:24.900453 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 08:56:24.900480 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 08:56:24.900488 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 08:56:24.900495 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 08:56:24.900512 kernel: Remapping and enabling EFI services.
Dec 13 08:56:24.900520 kernel: smp: Bringing up secondary CPUs ...
Dec 13 08:56:24.900527 kernel: Detected PIPT I-cache on CPU1
Dec 13 08:56:24.900535 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 08:56:24.900542 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Dec 13 08:56:24.900549 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 08:56:24.900556 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 08:56:24.900563 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 08:56:24.900571 kernel: SMP: Total of 2 processors activated.
Dec 13 08:56:24.900578 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 08:56:24.900587 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 08:56:24.900595 kernel: CPU features: detected: Common not Private translations
Dec 13 08:56:24.900608 kernel: CPU features: detected: CRC32 instructions
Dec 13 08:56:24.900617 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 08:56:24.900624 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 08:56:24.900632 kernel: CPU features: detected: LSE atomic instructions
Dec 13 08:56:24.900639 kernel: CPU features: detected: Privileged Access Never
Dec 13 08:56:24.900647 kernel: CPU features: detected: RAS Extension Support
Dec 13 08:56:24.900655 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 08:56:24.900664 kernel: CPU: All CPU(s) started at EL1
Dec 13 08:56:24.900672 kernel: alternatives: applying system-wide alternatives
Dec 13 08:56:24.900679 kernel: devtmpfs: initialized
Dec 13 08:56:24.900687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 08:56:24.900695 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 08:56:24.900702 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 08:56:24.900710 kernel: SMBIOS 3.0.0 present.
Dec 13 08:56:24.900719 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 13 08:56:24.900727 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 08:56:24.900734 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 08:56:24.900742 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 08:56:24.900749 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 08:56:24.900757 kernel: audit: initializing netlink subsys (disabled)
Dec 13 08:56:24.900765 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Dec 13 08:56:24.900772 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 08:56:24.900780 kernel: cpuidle: using governor menu
Dec 13 08:56:24.900789 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 08:56:24.900837 kernel: ASID allocator initialised with 32768 entries
Dec 13 08:56:24.900848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 08:56:24.900855 kernel: Serial: AMBA PL011 UART driver
Dec 13 08:56:24.900863 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 08:56:24.900871 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 08:56:24.900879 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 08:56:24.900886 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 08:56:24.900893 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 08:56:24.900904 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 08:56:24.900912 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 08:56:24.900919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 08:56:24.900927 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 08:56:24.900934 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 08:56:24.900942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 08:56:24.900949 kernel: ACPI: Added _OSI(Module Device)
Dec 13 08:56:24.900956 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 08:56:24.900964 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 08:56:24.900972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 08:56:24.900980 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 08:56:24.900987 kernel: ACPI: Interpreter enabled
Dec 13 08:56:24.900994 kernel: ACPI: Using GIC for interrupt routing
Dec 13 08:56:24.901001 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 08:56:24.901050 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 08:56:24.901061 kernel: printk: console [ttyAMA0] enabled
Dec 13 08:56:24.901069 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 08:56:24.901241 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 08:56:24.901373 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 08:56:24.902726 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 08:56:24.902873 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 08:56:24.902989 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 08:56:24.903007 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 08:56:24.903015 kernel: PCI host bridge to bus 0000:00
Dec 13 08:56:24.903112 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 08:56:24.903196 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 08:56:24.903255 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 08:56:24.904463 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 08:56:24.904613 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 08:56:24.904775 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Dec 13 08:56:24.904949 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Dec 13 08:56:24.905047 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 08:56:24.905128 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.905198 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Dec 13 08:56:24.905312 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.905411 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Dec 13 08:56:24.905496 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.905577 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Dec 13 08:56:24.905663 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.905739 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Dec 13 08:56:24.905819 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.905913 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Dec 13 08:56:24.905990 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.906064 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Dec 13 08:56:24.906137 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.906214 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Dec 13 08:56:24.906290 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.906359 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Dec 13 08:56:24.909682 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 08:56:24.909783 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Dec 13 08:56:24.909935 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Dec 13 08:56:24.910007 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Dec 13 08:56:24.910219 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 08:56:24.910367 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Dec 13 08:56:24.910494 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 08:56:24.911789 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 08:56:24.911997 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 08:56:24.912078 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Dec 13 08:56:24.912163 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 08:56:24.912279 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Dec 13 08:56:24.912359 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 13 08:56:24.912477 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 08:56:24.912562 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 13 08:56:24.912655 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 08:56:24.912729 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 13 08:56:24.912812 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 08:56:24.913002 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Dec 13 08:56:24.913124 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 08:56:24.913213 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 08:56:24.913294 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Dec 13 08:56:24.913366 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 13 08:56:24.913524 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 08:56:24.913608 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 08:56:24.913678 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 13 08:56:24.913747 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 13 08:56:24.913893 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 08:56:24.913990 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 08:56:24.914117 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 13 08:56:24.914201 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 08:56:24.914282 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 13 08:56:24.914351 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 08:56:24.914485 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 08:56:24.914560 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 13 08:56:24.914636 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 08:56:24.914811 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 08:56:24.914920 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 13 08:56:24.914990 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Dec 13 08:56:24.915062 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 08:56:24.915128 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 13 08:56:24.915193 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 13 08:56:24.915270 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 08:56:24.915337 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 13 08:56:24.917560 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 13 08:56:24.917709 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 08:56:24.917781 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 13 08:56:24.917923 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 13 08:56:24.918002 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 08:56:24.918068 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 13 08:56:24.918141 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 13 08:56:24.918211 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 08:56:24.918279 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 08:56:24.918348 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 08:56:24.920440 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 08:56:24.920558 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 08:56:24.920677 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 08:56:24.920765 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Dec 13 08:56:24.920887 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 08:56:24.920968 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Dec 13 08:56:24.921038 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 08:56:24.921108 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Dec 13 08:56:24.921174 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 08:56:24.921251 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Dec 13 08:56:24.921317 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 08:56:24.921429 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Dec 13 08:56:24.921500 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 08:56:24.921579 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Dec 13 08:56:24.921656 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 08:56:24.921729 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Dec 13 08:56:24.921802 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Dec 13 08:56:24.921982 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Dec 13 08:56:24.922060 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 08:56:24.922169 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Dec 13 08:56:24.922242 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 08:56:24.922311 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Dec 13 08:56:24.922377 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 08:56:24.922485 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Dec 13 08:56:24.922560 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 08:56:24.922627 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Dec 13 08:56:24.922691 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 08:56:24.922761 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Dec 13 08:56:24.922878 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 08:56:24.922964 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Dec 13 08:56:24.923031 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 08:56:24.923105 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Dec 13 08:56:24.923176 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 08:56:24.923244 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Dec 13 08:56:24.923309 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Dec 13 08:56:24.923637 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Dec 13 08:56:24.923767 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Dec 13 08:56:24.923877 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 08:56:24.923948 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Dec 13 08:56:24.924016 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 08:56:24.924092 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 08:56:24.924156 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 08:56:24.924220 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 08:56:24.924294 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Dec 13 08:56:24.924363 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 08:56:24.924484 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 08:56:24.924549 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 13 08:56:24.924612 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 08:56:24.924685 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 08:56:24.924751 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Dec 13 08:56:24.924818 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 08:56:24.924899 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 08:56:24.924970 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 13 08:56:24.925033 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 08:56:24.925107 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 08:56:24.925176 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 08:56:24.925241 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 08:56:24.925305 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 13 08:56:24.925370 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 08:56:24.925472 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Dec 13 08:56:24.925549 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 08:56:24.925615 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 08:56:24.925680 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 13 08:56:24.925745 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 08:56:24.925867 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Dec 13 08:56:24.925956 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Dec 13 08:56:24.926026 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 08:56:24.926092 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 08:56:24.926164 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 13 08:56:24.926231 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 08:56:24.926306 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Dec 13 08:56:24.926375 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Dec 13 08:56:24.926464 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Dec 13 08:56:24.926600 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 08:56:24.926674 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 08:56:24.926739 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 13 08:56:24.926811 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 08:56:24.926898 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 08:56:24.926992 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 08:56:24.927059 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 13 08:56:24.927124 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 08:56:24.927194 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 08:56:24.927259 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 13 08:56:24.927324 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Dec 13 08:56:24.927466 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 08:56:24.927542 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 08:56:24.927601 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 08:56:24.927660 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 08:56:24.927795 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 08:56:24.927883 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 08:56:24.927948 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 08:56:24.928025 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Dec 13 08:56:24.928086 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 08:56:24.928146 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 08:56:24.928219 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Dec 13 08:56:24.928280 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 08:56:24.928341 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 08:56:24.928777 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 08:56:24.928918 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 13 08:56:24.928995 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 08:56:24.929272 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Dec 13 08:56:24.929352 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 13 08:56:24.929530 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 08:56:24.929607 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Dec 13 08:56:24.929675 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 13 08:56:24.930126 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 08:56:24.930211 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Dec 13 08:56:24.930273 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 13 08:56:24.930341 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 08:56:24.930543 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Dec 13 08:56:24.930612 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 13 08:56:24.930674 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 08:56:24.930744 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Dec 13 08:56:24.930950 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 13 08:56:24.931032 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 08:56:24.931045 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 08:56:24.931053 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 08:56:24.931061 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 08:56:24.931069 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 08:56:24.931077 kernel: iommu: Default domain type: Translated
Dec 13 08:56:24.931085 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 08:56:24.931092 kernel: efivars: Registered efivars operations
Dec 13 08:56:24.931100 kernel: vgaarb: loaded
Dec 13 08:56:24.931108 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 08:56:24.931118 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 08:56:24.931126 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 08:56:24.931134 kernel: pnp: PnP ACPI init
Dec 13 08:56:24.931213 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 08:56:24.931225 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 08:56:24.931233 kernel: NET: Registered PF_INET protocol family
Dec 13 08:56:24.931241 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 08:56:24.931249 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 08:56:24.931259 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 08:56:24.931267 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 08:56:24.931275 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 08:56:24.931283 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 08:56:24.931291 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 08:56:24.931298 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 08:56:24.931306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 08:56:24.931406 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 13 08:56:24.931421 kernel: PCI: CLS 0 bytes, default 64
Dec 13 08:56:24.931432 kernel: kvm [1]: HYP mode not available
Dec 13 08:56:24.931439 kernel: Initialise system trusted keyrings
Dec 13 08:56:24.931447 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 08:56:24.931455 kernel: Key type asymmetric registered
Dec 13 08:56:24.931462 kernel: Asymmetric key parser 'x509' registered
Dec 13 08:56:24.931470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 08:56:24.931478 kernel: io scheduler mq-deadline registered
Dec 13 08:56:24.931485 kernel: io scheduler kyber registered
Dec 13 08:56:24.931493 kernel: io scheduler bfq registered
Dec 13 08:56:24.931503 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 08:56:24.931585 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Dec 13 08:56:24.931656 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Dec 13 08:56:24.931724 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.931793 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Dec 13 08:56:24.931948 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Dec 13 08:56:24.932038 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.932111 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Dec 13 08:56:24.932178 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Dec 13 08:56:24.932245 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.933506 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Dec 13 08:56:24.933708 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Dec 13 08:56:24.933789 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.933926 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Dec 13 08:56:24.934012 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Dec 13 08:56:24.934085 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.934158 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Dec 13 08:56:24.934225 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Dec 13 08:56:24.934302 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.934380 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Dec 13 08:56:24.934471 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Dec 13 08:56:24.934540 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.934611 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Dec 13 08:56:24.934737 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Dec 13 08:56:24.934813 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.934837 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Dec 13 08:56:24.934914 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Dec 13 08:56:24.934982 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Dec 13 08:56:24.935085 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 08:56:24.935099 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 08:56:24.935107 kernel: ACPI: button: Power Button [PWRB]
Dec 13 08:56:24.935120 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 08:56:24.935207 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Dec 13 08:56:24.935317 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Dec 13 08:56:24.935416 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Dec 13 08:56:24.935430 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 08:56:24.935438 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 08:56:24.935514 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Dec 13 08:56:24.935525 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Dec 13 08:56:24.935533 kernel: thunder_xcv, ver 1.0
Dec 13 08:56:24.935546 kernel: thunder_bgx, ver 1.0
Dec 13 08:56:24.935554 kernel: nicpf, ver 1.0
Dec 13 08:56:24.935562 kernel: nicvf, ver 1.0
Dec 13 08:56:24.935648 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 08:56:24.935713 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T08:56:24 UTC (1734080184)
Dec 13 08:56:24.935723 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 08:56:24.935731 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 08:56:24.935739 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 08:56:24.935750 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 08:56:24.935757 kernel: NET: Registered PF_INET6 protocol family
Dec 13 08:56:24.935765 kernel: Segment Routing with IPv6
Dec 13 08:56:24.935773 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 08:56:24.935780 kernel: NET: Registered PF_PACKET protocol family
Dec 13 08:56:24.935788 kernel: Key type dns_resolver registered
Dec 13 08:56:24.935796 kernel: registered taskstats version 1
Dec 13 08:56:24.935804 kernel: Loading compiled-in X.509 certificates
Dec 13 08:56:24.935812 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 08:56:24.935859 kernel: Key type .fscrypt registered
Dec 13 08:56:24.935870 kernel: Key type fscrypt-provisioning registered
Dec 13 08:56:24.935878 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 08:56:24.935886 kernel: ima: Allocated hash algorithm: sha1
Dec 13 08:56:24.935893 kernel: ima: No architecture policies found
Dec 13 08:56:24.935901 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 08:56:24.935909 kernel: clk: Disabling unused clocks
Dec 13 08:56:24.935917 kernel: Freeing unused kernel memory: 39360K
Dec 13 08:56:24.935924 kernel: Run /init as init process
Dec 13 08:56:24.935935 kernel: with arguments:
Dec 13 08:56:24.935943 kernel: /init
Dec 13 08:56:24.935951 kernel: with environment:
Dec 13 08:56:24.935958 kernel: HOME=/
Dec 13 08:56:24.935966 kernel: TERM=linux
Dec 13 08:56:24.935973 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 08:56:24.935983 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:56:24.935993 systemd[1]: Detected virtualization kvm.
Dec 13 08:56:24.936003 systemd[1]: Detected architecture arm64.
Dec 13 08:56:24.936011 systemd[1]: Running in initrd.
Dec 13 08:56:24.936019 systemd[1]: No hostname configured, using default hostname.
Dec 13 08:56:24.936027 systemd[1]: Hostname set to .
Dec 13 08:56:24.936035 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:56:24.936043 systemd[1]: Queued start job for default target initrd.target.
Dec 13 08:56:24.936051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:56:24.936060 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:56:24.936070 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 08:56:24.936079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:56:24.936137 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 08:56:24.936146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 08:56:24.936156 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 08:56:24.936165 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 08:56:24.936177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:56:24.936185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:56:24.936195 systemd[1]: Reached target paths.target - Path Units.
Dec 13 08:56:24.936203 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:56:24.936211 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:56:24.936219 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 08:56:24.936228 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:56:24.936236 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:56:24.936245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 08:56:24.936255 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 08:56:24.936263 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:56:24.936272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:56:24.936280 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:56:24.936288 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 08:56:24.936296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 08:56:24.936305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 08:56:24.936313 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 08:56:24.936324 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 08:56:24.936332 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 08:56:24.936340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 08:56:24.936349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:56:24.936357 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 08:56:24.936366 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:56:24.938498 systemd-journald[236]: Collecting audit messages is disabled.
Dec 13 08:56:24.938551 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 08:56:24.938620 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 08:56:24.938650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 08:56:24.938661 kernel: Bridge firewalling registered
Dec 13 08:56:24.938670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:56:24.938678 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 08:56:24.938687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:56:24.938695 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:56:24.938704 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:56:24.938712 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 08:56:24.938725 systemd-journald[236]: Journal started
Dec 13 08:56:24.938744 systemd-journald[236]: Runtime Journal (/run/log/journal/686b353354a449168bfa889f376c1201) is 8.0M, max 76.5M, 68.5M free.
Dec 13 08:56:24.891072 systemd-modules-load[237]: Inserted module 'overlay'
Dec 13 08:56:24.914813 systemd-modules-load[237]: Inserted module 'br_netfilter'
Dec 13 08:56:24.942176 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 08:56:24.944361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:56:24.949595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 08:56:24.960494 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:56:24.965641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:56:24.970733 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 08:56:24.973432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:56:24.984629 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 08:56:24.997458 dracut-cmdline[271]: dracut-dracut-053
Dec 13 08:56:25.003245 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 08:56:25.032415 systemd-resolved[274]: Positive Trust Anchors:
Dec 13 08:56:25.032437 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 08:56:25.032478 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 08:56:25.038739 systemd-resolved[274]: Defaulting to hostname 'linux'.
Dec 13 08:56:25.039953 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 08:56:25.040710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:56:25.112472 kernel: SCSI subsystem initialized
Dec 13 08:56:25.116486 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 08:56:25.124433 kernel: iscsi: registered transport (tcp)
Dec 13 08:56:25.138645 kernel: iscsi: registered transport (qla4xxx)
Dec 13 08:56:25.138785 kernel: QLogic iSCSI HBA Driver
Dec 13 08:56:25.190243 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 08:56:25.195632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 08:56:25.219687 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 08:56:25.219758 kernel: device-mapper: uevent: version 1.0.3
Dec 13 08:56:25.220495 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 08:56:25.274459 kernel: raid6: neonx8 gen() 15606 MB/s
Dec 13 08:56:25.291499 kernel: raid6: neonx4 gen() 15484 MB/s
Dec 13 08:56:25.308435 kernel: raid6: neonx2 gen() 13132 MB/s
Dec 13 08:56:25.325471 kernel: raid6: neonx1 gen() 10434 MB/s
Dec 13 08:56:25.342451 kernel: raid6: int64x8 gen() 6897 MB/s
Dec 13 08:56:25.359485 kernel: raid6: int64x4 gen() 7283 MB/s
Dec 13 08:56:25.376510 kernel: raid6: int64x2 gen() 6077 MB/s
Dec 13 08:56:25.393464 kernel: raid6: int64x1 gen() 4999 MB/s
Dec 13 08:56:25.393553 kernel: raid6: using algorithm neonx8 gen() 15606 MB/s
Dec 13 08:56:25.410449 kernel: raid6: .... xor() 11704 MB/s, rmw enabled
Dec 13 08:56:25.410529 kernel: raid6: using neon recovery algorithm
Dec 13 08:56:25.417455 kernel: xor: measuring software checksum speed
Dec 13 08:56:25.417525 kernel: 8regs : 19769 MB/sec
Dec 13 08:56:25.418540 kernel: 32regs : 19641 MB/sec
Dec 13 08:56:25.418581 kernel: arm64_neon : 26989 MB/sec
Dec 13 08:56:25.418592 kernel: xor: using function: arm64_neon (26989 MB/sec)
Dec 13 08:56:25.470537 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 08:56:25.486579 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 08:56:25.499762 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:56:25.514898 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Dec 13 08:56:25.518507 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:56:25.527614 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 08:56:25.544037 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Dec 13 08:56:25.585481 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 08:56:25.592690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 08:56:25.642565 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:56:25.652991 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 08:56:25.672813 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 08:56:25.675555 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 08:56:25.677516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:56:25.679270 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 08:56:25.684688 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 08:56:25.706314 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 08:56:25.744755 kernel: scsi host0: Virtio SCSI HBA
Dec 13 08:56:25.760457 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 08:56:25.760541 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 08:56:25.770676 kernel: ACPI: bus type USB registered
Dec 13 08:56:25.771135 kernel: usbcore: registered new interface driver usbfs
Dec 13 08:56:25.771278 kernel: usbcore: registered new interface driver hub
Dec 13 08:56:25.772435 kernel: usbcore: registered new device driver usb
Dec 13 08:56:25.799723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 08:56:25.807070 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 08:56:25.816665 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 08:56:25.816782 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 08:56:25.816930 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 08:56:25.817038 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 08:56:25.817159 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 08:56:25.817263 kernel: hub 1-0:1.0: USB hub found
Dec 13 08:56:25.817372 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 08:56:25.817541 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 08:56:25.817670 kernel: hub 2-0:1.0: USB hub found
Dec 13 08:56:25.817759 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 08:56:25.802019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:56:25.808452 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:56:25.809237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:56:25.809631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:56:25.811787 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:56:25.818737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:56:25.832633 kernel: sr 0:0:0:0: Power-on or device reset occurred
Dec 13 08:56:25.842560 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Dec 13 08:56:25.842697 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 08:56:25.842709 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 13 08:56:25.850598 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:56:25.854437 kernel: sd 0:0:0:1: Power-on or device reset occurred
Dec 13 08:56:25.863365 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 08:56:25.863568 kernel: sd 0:0:0:1: [sda] Write Protect is off
Dec 13 08:56:25.863658 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Dec 13 08:56:25.863740 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 08:56:25.863832 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 08:56:25.863845 kernel: GPT:17805311 != 80003071
Dec 13 08:56:25.863854 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 08:56:25.863870 kernel: GPT:17805311 != 80003071
Dec 13 08:56:25.863879 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 08:56:25.863888 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 08:56:25.863898 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Dec 13 08:56:25.862300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:56:25.883464 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:56:25.919432 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (507)
Dec 13 08:56:25.923463 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (521)
Dec 13 08:56:25.932335 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 08:56:25.946854 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 08:56:25.951297 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 08:56:25.952067 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 08:56:25.957884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 08:56:25.967631 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 08:56:25.987684 disk-uuid[573]: Primary Header is updated. Dec 13 08:56:25.987684 disk-uuid[573]: Secondary Entries is updated. Dec 13 08:56:25.987684 disk-uuid[573]: Secondary Header is updated. Dec 13 08:56:25.993490 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:26.000418 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:26.003433 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:26.049456 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 08:56:26.293428 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 13 08:56:26.431416 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 13 08:56:26.433675 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 08:56:26.438501 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 13 08:56:26.491525 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 13 08:56:26.491854 kernel: usbcore: registered new interface driver usbhid Dec 13 08:56:26.492765 kernel: usbhid: USB HID core driver Dec 13 08:56:27.012335 disk-uuid[574]: The operation has completed successfully. Dec 13 08:56:27.013087 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:27.070549 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 08:56:27.071786 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 08:56:27.078861 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 08:56:27.085858 sh[592]: Success Dec 13 08:56:27.100436 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 08:56:27.169145 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 08:56:27.173307 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 08:56:27.175476 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 08:56:27.194564 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 08:56:27.194633 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:27.194644 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 08:56:27.195521 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 08:56:27.195543 kernel: BTRFS info (device dm-0): using free space tree Dec 13 08:56:27.201417 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 08:56:27.203761 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
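disk-uuid then regenerates the GPT GUIDs on first boot and rewrites the primary and secondary headers, as logged above. One detail worth knowing when reading such headers yourself: GPT stores the first three UUID fields little-endian, a mixed-endian layout Python exposes directly:

    import uuid

    g = uuid.uuid4()                       # a fresh random GUID, as a regeneration would pick
    print("textual form  :", g)
    print("big-endian    :", g.bytes.hex())
    print("on-disk (GPT) :", g.bytes_le.hex())   # first three fields byte-swapped
    assert uuid.UUID(bytes_le=g.bytes_le) == g   # the layout round-trips losslessly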
Dec 13 08:56:27.206244 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 08:56:27.212665 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 08:56:27.216956 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 08:56:27.231052 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:27.231122 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:27.231139 kernel: BTRFS info (device sda6): using free space tree Dec 13 08:56:27.234508 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 08:56:27.234582 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 08:56:27.248424 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:27.248128 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 08:56:27.256203 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 08:56:27.261701 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 08:56:27.343937 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 08:56:27.351685 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:56:27.375372 ignition[678]: Ignition 2.19.0 Dec 13 08:56:27.376039 ignition[678]: Stage: fetch-offline Dec 13 08:56:27.376092 ignition[678]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:27.376100 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:27.376318 ignition[678]: parsed url from cmdline: "" Dec 13 08:56:27.378362 systemd-networkd[777]: lo: Link UP Dec 13 08:56:27.376322 ignition[678]: no config URL provided Dec 13 08:56:27.378365 systemd-networkd[777]: lo: Gained carrier Dec 13 08:56:27.376326 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:56:27.379689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:56:27.376337 ignition[678]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:56:27.382074 systemd-networkd[777]: Enumeration completed Dec 13 08:56:27.376343 ignition[678]: failed to fetch config: resource requires networking Dec 13 08:56:27.382377 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:56:27.376578 ignition[678]: Ignition finished successfully Dec 13 08:56:27.383339 systemd[1]: Reached target network.target - Network. Dec 13 08:56:27.384125 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:27.384129 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:27.386564 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:27.386567 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:27.387174 systemd-networkd[777]: eth0: Link UP Dec 13 08:56:27.387177 systemd-networkd[777]: eth0: Gained carrier Dec 13 08:56:27.387186 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 08:56:27.392746 systemd-networkd[777]: eth1: Link UP Dec 13 08:56:27.392750 systemd-networkd[777]: eth1: Gained carrier Dec 13 08:56:27.392762 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:27.393818 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 08:56:27.411260 ignition[780]: Ignition 2.19.0 Dec 13 08:56:27.411270 ignition[780]: Stage: fetch Dec 13 08:56:27.411667 ignition[780]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:27.411679 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:27.411774 ignition[780]: parsed url from cmdline: "" Dec 13 08:56:27.411777 ignition[780]: no config URL provided Dec 13 08:56:27.411782 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:56:27.411788 ignition[780]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:56:27.411856 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 08:56:27.412753 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 08:56:27.427489 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 08:56:27.446501 systemd-networkd[777]: eth0: DHCPv4 address 5.75.230.207/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 08:56:27.613069 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 08:56:27.620124 ignition[780]: GET result: OK Dec 13 08:56:27.620309 ignition[780]: parsing config with SHA512: c17d8ce07ac1b5afe1ef1f389ba9063f5b453c42d714fc39f16af2ed9f1efb706c39b1d365be3dbb9e4be721de74deeea2abc68cb17ad181a70ac12bbad09b87 Dec 13 08:56:27.626963 unknown[780]: fetched base config from "system" Dec 13 08:56:27.626972 unknown[780]: fetched base config from "system" Dec 13 08:56:27.627361 ignition[780]: fetch: fetch complete Dec 13 08:56:27.626977 unknown[780]: fetched user config from "hetzner" Dec 13 08:56:27.627365 ignition[780]: fetch: fetch passed Dec 13 08:56:27.629716 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 08:56:27.627450 ignition[780]: Ignition finished successfully Dec 13 08:56:27.638655 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 08:56:27.651507 ignition[787]: Ignition 2.19.0 Dec 13 08:56:27.651517 ignition[787]: Stage: kargs Dec 13 08:56:27.651708 ignition[787]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:27.651718 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:27.652700 ignition[787]: kargs: kargs passed Dec 13 08:56:27.652755 ignition[787]: Ignition finished successfully Dec 13 08:56:27.655658 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 08:56:27.661673 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 08:56:27.678477 ignition[793]: Ignition 2.19.0 Dec 13 08:56:27.678537 ignition[793]: Stage: disks Dec 13 08:56:27.679483 ignition[793]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:27.679497 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:27.682271 ignition[793]: disks: disks passed Dec 13 08:56:27.682333 ignition[793]: Ignition finished successfully Dec 13 08:56:27.685683 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
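The fetch stage above fails its first GET while the network is still unreachable, succeeds on attempt #2 once DHCP has completed, and logs a SHA512 of the fetched config before parsing it. A sketch of that fetch-with-retry shape; the endpoint is the one in the log, but the retry count and delay are placeholders rather than Ignition's actual backoff policy:

    import hashlib
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"   # endpoint from the log

    def fetch_userdata(attempts=5, delay=2.0):
        for n in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    body = resp.read()
            except (urllib.error.URLError, OSError) as exc:
                # e.g. "connect: network is unreachable" before DHCP finishes
                print(f"GET error: {exc}: attempt #{n}")
                time.sleep(delay)
                continue
            print(f"GET result: OK (attempt #{n})")
            print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
            return body
        raise RuntimeError("no config after all attempts")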
Dec 13 08:56:27.687577 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 08:56:27.689343 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 08:56:27.690072 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 08:56:27.691162 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:56:27.692170 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:56:27.698623 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 08:56:27.719652 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 08:56:27.724478 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 08:56:27.732093 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 08:56:27.783418 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 08:56:27.784084 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 08:56:27.785451 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 08:56:27.793595 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:56:27.800273 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 08:56:27.803627 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 08:56:27.804288 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 08:56:27.804320 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:56:27.813793 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (809) Dec 13 08:56:27.819468 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:27.819536 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:27.819556 kernel: BTRFS info (device sda6): using free space tree Dec 13 08:56:27.819935 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 08:56:27.828430 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 08:56:27.828500 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 08:56:27.831780 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 08:56:27.840241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 08:56:27.880181 coreos-metadata[811]: Dec 13 08:56:27.880 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 08:56:27.882689 coreos-metadata[811]: Dec 13 08:56:27.881 INFO Fetch successful Dec 13 08:56:27.883422 coreos-metadata[811]: Dec 13 08:56:27.883 INFO wrote hostname ci-4081-2-1-e-e153687e15 to /sysroot/etc/hostname Dec 13 08:56:27.885509 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 08:56:27.885272 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
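systemd-fsck's one-line summary above ("ROOT: clean, 14/1628000 files, 120691/1617920 blocks") converts readily into utilization figures when scanning many such logs; a small parser, assuming the e2fsck summary format stays stable:

    import re

    line = "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"  # from the log above
    m = re.match(r"(\S+): clean, (\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    label = m.group(1)
    used_inodes, total_inodes, used_blocks, total_blocks = map(int, m.groups()[1:])
    print(f"{label}: {used_inodes / total_inodes:.4%} of inodes, "
          f"{used_blocks / total_blocks:.2%} of blocks in use")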
Dec 13 08:56:27.895259 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Dec 13 08:56:27.901254 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 08:56:27.906341 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 08:56:28.017183 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 08:56:28.021550 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 08:56:28.025644 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 08:56:28.036404 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:28.053589 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 08:56:28.063159 ignition[926]: INFO : Ignition 2.19.0 Dec 13 08:56:28.064478 ignition[926]: INFO : Stage: mount Dec 13 08:56:28.064478 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:28.064478 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:28.066562 ignition[926]: INFO : mount: mount passed Dec 13 08:56:28.066993 ignition[926]: INFO : Ignition finished successfully Dec 13 08:56:28.069089 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 08:56:28.077647 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 08:56:28.196471 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 08:56:28.211774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:56:28.221572 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (937) Dec 13 08:56:28.223706 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:28.223763 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:28.223777 kernel: BTRFS info (device sda6): using free space tree Dec 13 08:56:28.227444 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 08:56:28.227504 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 08:56:28.229536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 08:56:28.253673 ignition[954]: INFO : Ignition 2.19.0 Dec 13 08:56:28.253673 ignition[954]: INFO : Stage: files Dec 13 08:56:28.255240 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:28.255240 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:28.255240 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Dec 13 08:56:28.258904 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 08:56:28.258904 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 08:56:28.260667 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 08:56:28.261413 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 08:56:28.262511 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 08:56:28.261790 unknown[954]: wrote ssh authorized keys file for user: core Dec 13 08:56:28.264593 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 08:56:28.265707 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 08:56:28.351500 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 08:56:28.471053 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 08:56:28.471053 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:28.473427 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 08:56:28.523512 systemd-networkd[777]: eth0: Gained IPv6LL Dec 13 08:56:28.651737 systemd-networkd[777]: eth1: Gained IPv6LL Dec 13 08:56:28.873954 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 08:56:29.125822 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:29.125822 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 08:56:29.129374 ignition[954]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:56:29.129374 ignition[954]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:56:29.129374 ignition[954]: INFO : files: files passed Dec 13 08:56:29.129374 ignition[954]: INFO : Ignition finished successfully Dec 13 08:56:29.130416 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 08:56:29.137708 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 08:56:29.141625 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 08:56:29.149628 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 08:56:29.150654 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 08:56:29.170780 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:56:29.170780 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:56:29.174545 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:56:29.174851 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:56:29.178079 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 08:56:29.187702 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 08:56:29.221219 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 08:56:29.221354 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 08:56:29.222777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 08:56:29.224064 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 08:56:29.225345 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 08:56:29.230699 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 08:56:29.252821 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:56:29.259649 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 08:56:29.273083 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:56:29.274637 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:56:29.276115 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 08:56:29.277300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 08:56:29.278127 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:56:29.279740 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 08:56:29.280312 systemd[1]: Stopped target basic.target - Basic System. Dec 13 08:56:29.281783 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 08:56:29.283963 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:56:29.284801 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 08:56:29.285898 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 08:56:29.286928 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 08:56:29.288106 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 08:56:29.289233 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 08:56:29.290277 systemd[1]: Stopped target swap.target - Swaps. Dec 13 08:56:29.291196 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 08:56:29.291327 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 08:56:29.292770 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:56:29.293426 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:56:29.294450 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 08:56:29.294526 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 13 08:56:29.295575 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 08:56:29.295704 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 08:56:29.297278 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 08:56:29.297413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:56:29.298584 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 08:56:29.298674 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 08:56:29.299634 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 08:56:29.299727 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 08:56:29.306631 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 08:56:29.307142 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 08:56:29.307360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:56:29.313724 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 08:56:29.314343 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 08:56:29.314582 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:56:29.319994 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 08:56:29.320118 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 08:56:29.330252 ignition[1006]: INFO : Ignition 2.19.0 Dec 13 08:56:29.330252 ignition[1006]: INFO : Stage: umount Dec 13 08:56:29.331309 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 08:56:29.331455 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 08:56:29.334706 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:29.334706 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:29.336146 ignition[1006]: INFO : umount: umount passed Dec 13 08:56:29.336146 ignition[1006]: INFO : Ignition finished successfully Dec 13 08:56:29.338379 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 08:56:29.338686 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 08:56:29.339870 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 08:56:29.339933 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 08:56:29.340610 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 08:56:29.340649 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 08:56:29.341888 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 08:56:29.341936 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 08:56:29.343044 systemd[1]: Stopped target network.target - Network. Dec 13 08:56:29.344009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 08:56:29.344062 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:56:29.345036 systemd[1]: Stopped target paths.target - Path Units. Dec 13 08:56:29.346150 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 08:56:29.349519 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:56:29.351953 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 13 08:56:29.353270 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 08:56:29.355038 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 08:56:29.355084 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 08:56:29.356913 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 08:56:29.356955 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 08:56:29.357750 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 08:56:29.357820 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 08:56:29.360633 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 08:56:29.360701 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 08:56:29.361586 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 08:56:29.365760 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 08:56:29.369105 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 08:56:29.369712 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 08:56:29.369817 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 08:56:29.369868 systemd-networkd[777]: eth1: DHCPv6 lease lost Dec 13 08:56:29.371600 systemd-networkd[777]: eth0: DHCPv6 lease lost Dec 13 08:56:29.373117 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 08:56:29.373238 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 08:56:29.375029 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 08:56:29.375149 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 08:56:29.378810 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 08:56:29.378924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 08:56:29.382267 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 08:56:29.382340 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:56:29.389611 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 08:56:29.390772 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 08:56:29.390897 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 08:56:29.394456 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 08:56:29.394523 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:56:29.395340 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 08:56:29.395394 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 08:56:29.396938 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 08:56:29.396987 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:56:29.398243 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:56:29.414225 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 08:56:29.414365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 08:56:29.420291 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 08:56:29.420543 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:56:29.422492 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 08:56:29.422548 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 08:56:29.423642 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 08:56:29.423678 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:56:29.424339 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 08:56:29.424413 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 08:56:29.426409 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 08:56:29.426461 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 08:56:29.428329 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 08:56:29.428372 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:56:29.436676 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 08:56:29.439692 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 08:56:29.440452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:56:29.441181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:56:29.441226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:29.442347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 08:56:29.442451 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 08:56:29.443818 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 08:56:29.446623 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 08:56:29.458565 systemd[1]: Switching root. Dec 13 08:56:29.496142 systemd-journald[236]: Journal stopped Dec 13 08:56:30.502111 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Dec 13 08:56:30.502205 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 08:56:30.502219 kernel: SELinux: policy capability open_perms=1 Dec 13 08:56:30.502229 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 08:56:30.502239 kernel: SELinux: policy capability always_check_network=0 Dec 13 08:56:30.502253 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 08:56:30.502263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 08:56:30.502273 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 08:56:30.502283 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 08:56:30.502294 kernel: audit: type=1403 audit(1734080189.676:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 08:56:30.502305 systemd[1]: Successfully loaded SELinux policy in 35.661ms. Dec 13 08:56:30.502333 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.248ms. Dec 13 08:56:30.502344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 08:56:30.502355 systemd[1]: Detected virtualization kvm. Dec 13 08:56:30.502366 systemd[1]: Detected architecture arm64. Dec 13 08:56:30.502377 systemd[1]: Detected first boot. Dec 13 08:56:30.502400 systemd[1]: Hostname set to <ci-4081-2-1-e-e153687e15>. Dec 13 08:56:30.502414 systemd[1]: Initializing machine ID from VM UUID.
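The hand-off is visible in the timestamps above: the initrd journal stops at 08:56:29.496, and the SIGTERM note is only flushed out at 08:56:30.502, after the root switch. A quick way to put numbers on such gaps; note these are journal emission times, and the short journal date format omits the year, so one is assumed (2024, matching the kernel build date earlier in this log):

    from datetime import datetime

    def ts(stamp, year=2024):
        # The journal's short date format carries no year; assume one.
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    marks = [
        ("Switching root.",         ts("Dec 13 08:56:29.458565")),
        ("Journal stopped",         ts("Dec 13 08:56:29.496142")),
        ("journald SIGTERM logged", ts("Dec 13 08:56:30.502111")),
    ]
    t0 = marks[0][1]
    for name, t in marks:
        print(f"+{(t - t0).total_seconds() * 1000:7.1f} ms  {name}")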
Dec 13 08:56:30.502425 zram_generator::config[1048]: No configuration found. Dec 13 08:56:30.502437 systemd[1]: Populated /etc with preset unit settings. Dec 13 08:56:30.502448 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 08:56:30.502458 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 08:56:30.502469 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 08:56:30.502485 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 08:56:30.502496 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 08:56:30.502507 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 08:56:30.502519 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 08:56:30.502530 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 08:56:30.502541 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 08:56:30.502552 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 08:56:30.502563 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 08:56:30.502574 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:56:30.502584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:56:30.502596 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 08:56:30.502608 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 08:56:30.502619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 08:56:30.502630 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 08:56:30.502641 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 08:56:30.502651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:56:30.502662 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 08:56:30.502673 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 08:56:30.502686 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 08:56:30.502697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 08:56:30.502708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:56:30.502719 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 08:56:30.502731 systemd[1]: Reached target slices.target - Slice Units. Dec 13 08:56:30.502742 systemd[1]: Reached target swap.target - Swaps. Dec 13 08:56:30.502752 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 08:56:30.502764 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 08:56:30.502774 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:56:30.502832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 08:56:30.502845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:56:30.502856 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Dec 13 08:56:30.502867 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 08:56:30.502877 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 08:56:30.502889 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 08:56:30.502900 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 08:56:30.502911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 08:56:30.502922 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 08:56:30.502934 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 08:56:30.502946 systemd[1]: Reached target machines.target - Containers. Dec 13 08:56:30.502957 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 08:56:30.502968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:30.502981 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 08:56:30.502997 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 08:56:30.503008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:56:30.503019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:56:30.503030 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:56:30.503041 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 08:56:30.503052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:56:30.503063 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 08:56:30.503074 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 08:56:30.503087 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 08:56:30.503098 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 08:56:30.503109 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 08:56:30.503120 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 08:56:30.503131 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 08:56:30.503142 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 08:56:30.503153 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 08:56:30.503164 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 08:56:30.503175 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 08:56:30.503187 systemd[1]: Stopped verity-setup.service. Dec 13 08:56:30.503199 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 08:56:30.503210 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 08:56:30.503222 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 08:56:30.503233 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 08:56:30.503245 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 08:56:30.503256 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Dec 13 08:56:30.503267 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:56:30.503278 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 08:56:30.503294 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 08:56:30.503305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:56:30.503316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:56:30.503326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:56:30.503337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:56:30.503350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 08:56:30.503366 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 08:56:30.509375 systemd-journald[1118]: Collecting audit messages is disabled. Dec 13 08:56:30.509449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:56:30.509471 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 08:56:30.509484 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 08:56:30.509495 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 08:56:30.509509 systemd-journald[1118]: Journal started Dec 13 08:56:30.509533 systemd-journald[1118]: Runtime Journal (/run/log/journal/686b353354a449168bfa889f376c1201) is 8.0M, max 76.5M, 68.5M free. Dec 13 08:56:30.513871 kernel: loop: module loaded Dec 13 08:56:30.236130 systemd[1]: Queued start job for default target multi-user.target. Dec 13 08:56:30.515550 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 08:56:30.257984 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 08:56:30.258658 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 08:56:30.516671 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 08:56:30.524490 kernel: fuse: init (API version 7.39) Dec 13 08:56:30.523931 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 08:56:30.525429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 08:56:30.526963 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 08:56:30.528033 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:56:30.528434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:56:30.541522 kernel: ACPI: bus type drm_connector registered Dec 13 08:56:30.544579 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 08:56:30.547497 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 08:56:30.547554 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 08:56:30.551559 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 08:56:30.560692 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 08:56:30.566587 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 08:56:30.567687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 08:56:30.574840 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 08:56:30.578220 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 08:56:30.579523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:56:30.586707 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 08:56:30.588557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:56:30.591704 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 08:56:30.596663 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 08:56:30.601594 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:56:30.602676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 08:56:30.607334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:56:30.609150 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 08:56:30.612093 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 08:56:30.638199 systemd-journald[1118]: Time spent on flushing to /var/log/journal/686b353354a449168bfa889f376c1201 is 42.585ms for 1126 entries. Dec 13 08:56:30.638199 systemd-journald[1118]: System Journal (/var/log/journal/686b353354a449168bfa889f376c1201) is 8.0M, max 584.8M, 576.8M free. Dec 13 08:56:30.708019 systemd-journald[1118]: Received client request to flush runtime journal. Dec 13 08:56:30.708101 kernel: loop0: detected capacity change from 0 to 114328 Dec 13 08:56:30.708126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 08:56:30.639448 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 08:56:30.641533 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 08:56:30.648930 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 08:56:30.708948 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:56:30.716725 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 08:56:30.718186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 08:56:30.722246 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 08:56:30.725432 kernel: loop1: detected capacity change from 0 to 8 Dec 13 08:56:30.729759 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 08:56:30.742396 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 08:56:30.753892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 08:56:30.756685 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 08:56:30.760758 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 08:56:30.787930 kernel: loop3: detected capacity change from 0 to 194512 Dec 13 08:56:30.808365 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Dec 13 08:56:30.808399 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. 
Dec 13 08:56:30.816743 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:56:30.848423 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 08:56:30.871957 kernel: loop5: detected capacity change from 0 to 8 Dec 13 08:56:30.874419 kernel: loop6: detected capacity change from 0 to 114432 Dec 13 08:56:30.889464 kernel: loop7: detected capacity change from 0 to 194512 Dec 13 08:56:30.916485 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 08:56:30.918060 (sd-merge)[1189]: Merged extensions into '/usr'. Dec 13 08:56:30.924447 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 08:56:30.924469 systemd[1]: Reloading... Dec 13 08:56:31.084863 zram_generator::config[1219]: No configuration found. Dec 13 08:56:31.115910 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 08:56:31.209620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:56:31.256993 systemd[1]: Reloading finished in 332 ms. Dec 13 08:56:31.281080 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 08:56:31.284247 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 08:56:31.293713 systemd[1]: Starting ensure-sysext.service... Dec 13 08:56:31.297662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 08:56:31.314612 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Dec 13 08:56:31.314630 systemd[1]: Reloading... Dec 13 08:56:31.324596 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 08:56:31.324912 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 08:56:31.325745 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 08:56:31.326020 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Dec 13 08:56:31.326072 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Dec 13 08:56:31.333045 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:56:31.333060 systemd-tmpfiles[1254]: Skipping /boot Dec 13 08:56:31.342270 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:56:31.342440 systemd-tmpfiles[1254]: Skipping /boot Dec 13 08:56:31.414455 zram_generator::config[1284]: No configuration found. Dec 13 08:56:31.505653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:56:31.554006 systemd[1]: Reloading finished in 239 ms. Dec 13 08:56:31.571604 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 08:56:31.572725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:56:31.588006 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
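The (sd-merge) lines above show systemd-sysext overlaying four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) onto /usr. A toy resolver illustrating the precedence, where later layers win; the staging paths are hypothetical, and the real merge is a single overlayfs mount over /usr rather than per-file lookups:

    import os

    # Base /usr plus the extension images named above, lowest precedence first.
    LAYERS = [
        "/usr",
        "/run/extensions/containerd-flatcar",   # hypothetical staging paths
        "/run/extensions/docker-flatcar",
        "/run/extensions/kubernetes",
        "/run/extensions/oem-hetzner",
    ]

    def resolve(relpath):
        # The highest-precedence layer that provides the file wins.
        for layer in reversed(LAYERS):
            candidate = os.path.join(layer, relpath)
            if os.path.exists(candidate):
                return candidate
        return None

    print(resolve("bin/bash"))  # on most systems: /usr/bin/bash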
Dec 13 08:56:31.591592 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 08:56:31.596847 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 08:56:31.600614 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 08:56:31.604925 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:56:31.606837 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 08:56:31.614438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:31.623593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:56:31.627720 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:56:31.630939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:56:31.632819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:56:31.637003 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 08:56:31.638753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:31.638959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:56:31.643165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:31.645155 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:56:31.646378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:56:31.652209 systemd[1]: Finished ensure-sysext.service. Dec 13 08:56:31.675064 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 08:56:31.676211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:56:31.676401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:56:31.688909 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 08:56:31.690101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:56:31.690223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:56:31.692375 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:56:31.692560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:56:31.700618 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 08:56:31.706367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:56:31.706483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:56:31.708702 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 08:56:31.713745 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:56:31.714503 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 13 08:56:31.722317 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Dec 13 08:56:31.723086 augenrules[1355]: No rules Dec 13 08:56:31.725578 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:56:31.755816 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 08:56:31.764930 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 08:56:31.780802 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 08:56:31.781755 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:56:31.791314 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:56:31.792182 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 08:56:31.850958 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 08:56:31.852411 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 08:56:31.915518 systemd-networkd[1372]: lo: Link UP Dec 13 08:56:31.915528 systemd-networkd[1372]: lo: Gained carrier Dec 13 08:56:31.916070 systemd-resolved[1324]: Positive Trust Anchors: Dec 13 08:56:31.916083 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 08:56:31.916115 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 08:56:31.916225 systemd-networkd[1372]: Enumeration completed Dec 13 08:56:31.916342 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:56:31.922664 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 08:56:31.926218 systemd-resolved[1324]: Using system hostname 'ci-4081-2-1-e-e153687e15'. Dec 13 08:56:31.938030 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1389) Dec 13 08:56:31.938129 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1378) Dec 13 08:56:31.939936 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 08:56:31.941295 systemd[1]: Reached target network.target - Network. Dec 13 08:56:31.943158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:56:31.945745 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1389) Dec 13 08:56:31.955076 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 08:56:32.055046 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 08:56:32.055879 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
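The positive trust anchor systemd-resolved prints above is the standard root-zone KSK DS record (key tag 20326), and the negative anchors exempt private and reverse-lookup zones from DNSSEC validation. On a running machine the effective per-link DNS state can be inspected with resolvectl, e.g.:

    resolvectl status              # per-link DNS servers, search domains, DNSSEC mode
    resolvectl query flatcar.org   # resolve through systemd-resolved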
Dec 13 08:56:32.057022 systemd-networkd[1372]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:32.059584 systemd-networkd[1372]: eth1: Link UP Dec 13 08:56:32.059703 systemd-networkd[1372]: eth1: Gained carrier Dec 13 08:56:32.059788 systemd-networkd[1372]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:32.062424 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 08:56:32.065666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 08:56:32.069223 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:32.069233 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:32.071275 systemd-networkd[1372]: eth0: Link UP Dec 13 08:56:32.071284 systemd-networkd[1372]: eth0: Gained carrier Dec 13 08:56:32.071304 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:32.088434 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 08:56:32.095662 systemd-networkd[1372]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 08:56:32.097991 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Dec 13 08:56:32.121229 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 08:56:32.121367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:32.129549 systemd-networkd[1372]: eth0: DHCPv4 address 5.75.230.207/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 08:56:32.130231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:56:32.131873 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Dec 13 08:56:32.133408 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Dec 13 08:56:32.137815 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 13 08:56:32.135624 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:56:32.141569 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 08:56:32.141645 kernel: [drm] features: -context_init Dec 13 08:56:32.141696 kernel: [drm] number of scanouts: 1 Dec 13 08:56:32.140529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:56:32.142702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:56:32.142756 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 08:56:32.143191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:56:32.144788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:56:32.154349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
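Both NICs are matched by the catch-all /usr/lib/systemd/network/zz-default.network, hence networkd's "potentially unpredictable interface name" warning on eth0 and eth1. A minimal sketch of such a fallback unit, assuming the usual DHCP-everything default (the actual Flatcar file may carry extra options):

    [Match]
    Name=*

    [Network]
    DHCP=yes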
Dec 13 08:56:32.156216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:56:32.158079 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:56:32.159426 kernel: [drm] number of cap sets: 0 Dec 13 08:56:32.164154 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:56:32.164331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:56:32.165233 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:56:32.168442 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 08:56:32.175884 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 08:56:32.186522 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 08:56:32.202857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:56:32.212303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:56:32.213261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:32.220691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:56:32.283244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:32.349690 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 08:56:32.358046 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 08:56:32.371565 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:56:32.398124 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 08:56:32.400255 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:56:32.401753 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:56:32.403334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 08:56:32.404109 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 08:56:32.405021 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 08:56:32.405693 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 08:56:32.406357 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 08:56:32.407061 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 08:56:32.407099 systemd[1]: Reached target paths.target - Path Units. Dec 13 08:56:32.407636 systemd[1]: Reached target timers.target - Timer Units. Dec 13 08:56:32.410352 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 08:56:32.412840 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 08:56:32.422948 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 08:56:32.425824 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 08:56:32.429193 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Dec 13 08:56:32.430248 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:56:32.430808 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:56:32.431339 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:56:32.431375 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:56:32.434564 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 08:56:32.437300 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:56:32.441073 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 08:56:32.449615 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 08:56:32.453566 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 08:56:32.457323 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 08:56:32.458021 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 08:56:32.460610 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 08:56:32.466629 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 08:56:32.469056 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 08:56:32.474689 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 08:56:32.476713 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 08:56:32.487655 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 08:56:32.491295 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 08:56:32.491931 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 08:56:32.494650 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 08:56:32.498092 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 08:56:32.500544 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 08:56:32.502134 coreos-metadata[1440]: Dec 13 08:56:32.501 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 08:56:32.505456 coreos-metadata[1440]: Dec 13 08:56:32.502 INFO Fetch successful Dec 13 08:56:32.505456 coreos-metadata[1440]: Dec 13 08:56:32.504 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 08:56:32.527475 coreos-metadata[1440]: Dec 13 08:56:32.506 INFO Fetch successful Dec 13 08:56:32.521529 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
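coreos-metadata pulls instance data from Hetzner's link-local metadata service; the endpoints it logs can be queried by hand from the instance, e.g.:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks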
Dec 13 08:56:32.520186 dbus-daemon[1441]: [system] SELinux support is enabled Dec 13 08:56:32.527920 extend-filesystems[1443]: Found loop4 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found loop5 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found loop6 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found loop7 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda1 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda2 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda3 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found usr Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda4 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda6 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda7 Dec 13 08:56:32.527920 extend-filesystems[1443]: Found sda9 Dec 13 08:56:32.527920 extend-filesystems[1443]: Checking size of /dev/sda9 Dec 13 08:56:32.559184 jq[1454]: true Dec 13 08:56:32.528800 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 08:56:32.565701 jq[1442]: false Dec 13 08:56:32.528833 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 08:56:32.530305 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 08:56:32.530325 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 08:56:32.549609 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 08:56:32.551639 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 08:56:32.582086 extend-filesystems[1443]: Resized partition /dev/sda9 Dec 13 08:56:32.582638 jq[1462]: true Dec 13 08:56:32.595059 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Dec 13 08:56:32.599612 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 08:56:32.600935 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 08:56:32.601122 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 08:56:32.607080 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 08:56:32.607735 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 08:56:32.610721 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 08:56:32.629566 update_engine[1453]: I20241213 08:56:32.629047 1453 main.cc:92] Flatcar Update Engine starting Dec 13 08:56:32.637425 tar[1456]: linux-arm64/helm Dec 13 08:56:32.639524 systemd[1]: Started update-engine.service - Update Engine. Dec 13 08:56:32.644411 update_engine[1453]: I20241213 08:56:32.644274 1453 update_check_scheduler.cc:74] Next update check in 2m59s Dec 13 08:56:32.651661 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 08:56:32.720015 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 08:56:32.721906 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
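extend-filesystems is preparing to grow the root filesystem in place: the following entries show resize2fs expanding the mounted ext4 on /dev/sda9 from 1617920 to 9393147 4 KiB blocks (roughly 6.2 GiB to 35.8 GiB). The equivalent manual step, assuming the underlying partition has already been enlarged, is simply:

    resize2fs /dev/sda9    # online grow of a mounted ext4 filesystem to fill its partition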
Dec 13 08:56:32.746411 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1379) Dec 13 08:56:32.787422 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 08:56:32.790534 systemd-logind[1452]: New seat seat0. Dec 13 08:56:32.802498 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 08:56:32.805792 extend-filesystems[1474]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 08:56:32.805792 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 08:56:32.805792 extend-filesystems[1474]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 08:56:32.802515 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 13 08:56:32.814431 extend-filesystems[1443]: Resized filesystem in /dev/sda9 Dec 13 08:56:32.814431 extend-filesystems[1443]: Found sr0 Dec 13 08:56:32.802749 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 08:56:32.806960 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 08:56:32.816499 bash[1515]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:56:32.807497 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 08:56:32.819472 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 08:56:32.831924 systemd[1]: Starting sshkeys.service... Dec 13 08:56:32.844800 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 08:56:32.855710 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 08:56:32.862597 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 08:56:32.896867 coreos-metadata[1524]: Dec 13 08:56:32.896 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 08:56:32.899587 coreos-metadata[1524]: Dec 13 08:56:32.899 INFO Fetch successful Dec 13 08:56:32.903301 unknown[1524]: wrote ssh authorized keys file for user: core Dec 13 08:56:32.943411 containerd[1473]: time="2024-12-13T08:56:32.943233480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 08:56:32.950197 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:56:32.951126 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 08:56:32.958012 systemd[1]: Finished sshkeys.service. Dec 13 08:56:33.021519 containerd[1473]: time="2024-12-13T08:56:33.021203160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:33.028721 containerd[1473]: time="2024-12-13T08:56:33.028540520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:33.028721 containerd[1473]: time="2024-12-13T08:56:33.028589800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 08:56:33.028721 containerd[1473]: time="2024-12-13T08:56:33.028608960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 08:56:33.028909 containerd[1473]: time="2024-12-13T08:56:33.028834960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 08:56:33.028909 containerd[1473]: time="2024-12-13T08:56:33.028858520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:33.028948 containerd[1473]: time="2024-12-13T08:56:33.028926320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:33.028948 containerd[1473]: time="2024-12-13T08:56:33.028940640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029488 containerd[1473]: time="2024-12-13T08:56:33.029126520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029488 containerd[1473]: time="2024-12-13T08:56:33.029146320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029488 containerd[1473]: time="2024-12-13T08:56:33.029160800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029488 containerd[1473]: time="2024-12-13T08:56:33.029170800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029488 containerd[1473]: time="2024-12-13T08:56:33.029245360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029629 containerd[1473]: time="2024-12-13T08:56:33.029517440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029651 containerd[1473]: time="2024-12-13T08:56:33.029626160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:33.029651 containerd[1473]: time="2024-12-13T08:56:33.029642360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 08:56:33.030092 containerd[1473]: time="2024-12-13T08:56:33.029734800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 08:56:33.030092 containerd[1473]: time="2024-12-13T08:56:33.029804160Z" level=info msg="metadata content store policy set" policy=shared Dec 13 08:56:33.038305 containerd[1473]: time="2024-12-13T08:56:33.037918600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 08:56:33.038305 containerd[1473]: time="2024-12-13T08:56:33.037991160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 08:56:33.038305 containerd[1473]: time="2024-12-13T08:56:33.038008880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Dec 13 08:56:33.038305 containerd[1473]: time="2024-12-13T08:56:33.038027400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 08:56:33.038305 containerd[1473]: time="2024-12-13T08:56:33.038042920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 08:56:33.038305 containerd[1473]: time="2024-12-13T08:56:33.038225800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 08:56:33.038537 containerd[1473]: time="2024-12-13T08:56:33.038496040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038629920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038656920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038671680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038687200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038701920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038717640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038732360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038748000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038774200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038814840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038829760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038852720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038867280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.038978 containerd[1473]: time="2024-12-13T08:56:33.038885160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.038900240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.038913240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.038928000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.038941040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.038956640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.038972720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.038989440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.039003280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.039016520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.039034240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.039050840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.039074200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.039087440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039285 containerd[1473]: time="2024-12-13T08:56:33.039103960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039233480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039256080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039268200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039280640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039290920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039303880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039314320Z" level=info msg="NRI interface is disabled by configuration." Dec 13 08:56:33.039543 containerd[1473]: time="2024-12-13T08:56:33.039325200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 08:56:33.042713 containerd[1473]: time="2024-12-13T08:56:33.039782880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 08:56:33.042713 containerd[1473]: time="2024-12-13T08:56:33.039877080Z" level=info msg="Connect containerd service" Dec 13 08:56:33.042713 containerd[1473]: time="2024-12-13T08:56:33.039919880Z" level=info msg="using legacy CRI server" Dec 13 08:56:33.042713 containerd[1473]: time="2024-12-13T08:56:33.039928760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 08:56:33.042713 containerd[1473]: time="2024-12-13T08:56:33.040300760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 08:56:33.045996 
containerd[1473]: time="2024-12-13T08:56:33.045934800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:56:33.049176 containerd[1473]: time="2024-12-13T08:56:33.049135600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 08:56:33.049982 containerd[1473]: time="2024-12-13T08:56:33.049241280Z" level=info msg="Start subscribing containerd event" Dec 13 08:56:33.050034 containerd[1473]: time="2024-12-13T08:56:33.049997160Z" level=info msg="Start recovering state" Dec 13 08:56:33.050473 containerd[1473]: time="2024-12-13T08:56:33.050451560Z" level=info msg="Start event monitor" Dec 13 08:56:33.050473 containerd[1473]: time="2024-12-13T08:56:33.050471440Z" level=info msg="Start snapshots syncer" Dec 13 08:56:33.050552 containerd[1473]: time="2024-12-13T08:56:33.050491360Z" level=info msg="Start cni network conf syncer for default" Dec 13 08:56:33.050552 containerd[1473]: time="2024-12-13T08:56:33.050499560Z" level=info msg="Start streaming server" Dec 13 08:56:33.050716 containerd[1473]: time="2024-12-13T08:56:33.050690960Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 08:56:33.051275 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 08:56:33.052717 containerd[1473]: time="2024-12-13T08:56:33.052683360Z" level=info msg="containerd successfully booted in 0.116327s" Dec 13 08:56:33.294544 tar[1456]: linux-arm64/LICENSE Dec 13 08:56:33.295018 tar[1456]: linux-arm64/README.md Dec 13 08:56:33.308834 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 08:56:33.771560 systemd-networkd[1372]: eth1: Gained IPv6LL Dec 13 08:56:33.772340 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Dec 13 08:56:33.777542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 08:56:33.779005 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 08:56:33.788605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:56:33.791852 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 08:56:33.847426 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 08:56:33.899497 systemd-networkd[1372]: eth0: Gained IPv6LL Dec 13 08:56:33.900500 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Dec 13 08:56:33.993319 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 08:56:34.016116 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 08:56:34.028903 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 08:56:34.037167 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 08:56:34.037559 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 08:56:34.046996 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 08:56:34.061804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 08:56:34.070055 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 08:56:34.073887 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 08:56:34.074817 systemd[1]: Reached target getty.target - Login Prompts. 
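containerd comes up cleanly, but its CRI plugin finds /etc/cni/net.d empty, so pod networking stays uninitialized until a network add-on drops a config there; the "failed to load cni during init" error is expected on a fresh node. For illustration only, a minimal conflist of the kind that would satisfy the check (name, subnet, and plugin choice are assumptions, not from the log), placed at e.g. /etc/cni/net.d/10-example.conflist:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }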
Dec 13 08:56:34.545512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:56:34.546773 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 08:56:34.551503 systemd[1]: Startup finished in 775ms (kernel) + 4.982s (initrd) + 4.910s (userspace) = 10.668s. Dec 13 08:56:34.553369 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:56:35.225614 kubelet[1571]: E1213 08:56:35.225485 1571 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:56:35.228996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:56:35.229182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:56:45.479524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 08:56:45.489960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:56:45.597612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:56:45.613983 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:56:45.663238 kubelet[1591]: E1213 08:56:45.663147 1591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:56:45.667631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:56:45.667983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:56:55.911952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 08:56:55.920838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:56:56.026182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:56:56.031825 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:56:56.085361 kubelet[1606]: E1213 08:56:56.085301 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:56:56.088872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:56:56.089067 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:03.921055 systemd-timesyncd[1340]: Contacted time server 194.50.19.204:123 (2.flatcar.pool.ntp.org). Dec 13 08:57:03.921141 systemd-timesyncd[1340]: Initial clock synchronization to Fri 2024-12-13 08:57:03.941755 UTC. Dec 13 08:57:06.161830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
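The kubelet crash loop that begins here is expected at this stage: the service restarts roughly every ten seconds because /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-provisioned node that file is only written during kubeadm init or kubeadm join. A minimal sketch of the file's shape (values illustrative, not recovered from this host):

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd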
Dec 13 08:57:06.170729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:06.285657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:06.298656 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:06.352766 kubelet[1622]: E1213 08:57:06.352703 1622 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:06.355942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:06.356097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:16.411914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 08:57:16.420772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:16.537744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:16.543341 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:16.597676 kubelet[1639]: E1213 08:57:16.597596 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:16.600762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:16.600918 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:17.923668 update_engine[1453]: I20241213 08:57:17.923522 1453 update_attempter.cc:509] Updating boot flags... Dec 13 08:57:17.967405 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1656) Dec 13 08:57:18.031779 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1658) Dec 13 08:57:26.662022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 08:57:26.672863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:26.788172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:26.801938 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:26.854551 kubelet[1673]: E1213 08:57:26.854474 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:26.857882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:26.858204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:36.911652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
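update_engine's "Updating boot flags" step marks the currently booted USR partition as known-good so the bootloader keeps selecting it, part of Flatcar's A/B update scheme. Assuming the standard Flatcar client tool, the updater can be queried with:

    update_engine_client -status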
Dec 13 08:57:36.925727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:37.042751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:37.042843 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:37.093102 kubelet[1689]: E1213 08:57:37.093022 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:37.097499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:37.097751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:41.783787 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 08:57:41.794882 systemd[1]: Started sshd@0-5.75.230.207:22-205.210.31.100:54911.service - OpenSSH per-connection server daemon (205.210.31.100:54911). Dec 13 08:57:42.098785 sshd[1699]: Connection closed by 205.210.31.100 port 54911 Dec 13 08:57:42.100350 systemd[1]: sshd@0-5.75.230.207:22-205.210.31.100:54911.service: Deactivated successfully. Dec 13 08:57:47.162217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 08:57:47.169767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:47.285316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:47.299881 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:47.357706 kubelet[1709]: E1213 08:57:47.357622 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:47.360138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:47.360268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:57.215625 systemd[1]: Started sshd@1-5.75.230.207:22-139.178.89.65:49896.service - OpenSSH per-connection server daemon (139.178.89.65:49896). Dec 13 08:57:57.411696 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 08:57:57.425773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:57.536596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 08:57:57.549972 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:57.607565 kubelet[1729]: E1213 08:57:57.607469 1729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:57.610571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:57.610716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:58.211235 sshd[1719]: Accepted publickey for core from 139.178.89.65 port 49896 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:57:58.212523 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:57:58.223619 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 08:57:58.236186 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 08:57:58.241485 systemd-logind[1452]: New session 1 of user core. Dec 13 08:57:58.251857 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 08:57:58.259962 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 08:57:58.265027 (systemd)[1739]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 08:57:58.380828 systemd[1739]: Queued start job for default target default.target. Dec 13 08:57:58.390053 systemd[1739]: Created slice app.slice - User Application Slice. Dec 13 08:57:58.390408 systemd[1739]: Reached target paths.target - Paths. Dec 13 08:57:58.390607 systemd[1739]: Reached target timers.target - Timers. Dec 13 08:57:58.392260 systemd[1739]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 08:57:58.409602 systemd[1739]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 08:57:58.409890 systemd[1739]: Reached target sockets.target - Sockets. Dec 13 08:57:58.409913 systemd[1739]: Reached target basic.target - Basic System. Dec 13 08:57:58.409971 systemd[1739]: Reached target default.target - Main User Target. Dec 13 08:57:58.410013 systemd[1739]: Startup finished in 137ms. Dec 13 08:57:58.410170 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 08:57:58.417736 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 08:57:59.119649 systemd[1]: Started sshd@2-5.75.230.207:22-139.178.89.65:51512.service - OpenSSH per-connection server daemon (139.178.89.65:51512). Dec 13 08:58:00.096524 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 51512 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:00.098587 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:00.105548 systemd-logind[1452]: New session 2 of user core. Dec 13 08:58:00.111916 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 08:58:00.780080 sshd[1750]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:00.785839 systemd[1]: sshd@2-5.75.230.207:22-139.178.89.65:51512.service: Deactivated successfully. Dec 13 08:58:00.787961 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 08:58:00.789776 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit.
Dec 13 08:58:00.790894 systemd-logind[1452]: Removed session 2. Dec 13 08:58:00.954013 systemd[1]: Started sshd@3-5.75.230.207:22-139.178.89.65:51516.service - OpenSSH per-connection server daemon (139.178.89.65:51516). Dec 13 08:58:01.940895 sshd[1757]: Accepted publickey for core from 139.178.89.65 port 51516 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:01.942879 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:01.948137 systemd-logind[1452]: New session 3 of user core. Dec 13 08:58:01.956680 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 08:58:02.618056 sshd[1757]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:02.622600 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Dec 13 08:58:02.623857 systemd[1]: sshd@3-5.75.230.207:22-139.178.89.65:51516.service: Deactivated successfully. Dec 13 08:58:02.625663 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 08:58:02.627668 systemd-logind[1452]: Removed session 3. Dec 13 08:58:02.794917 systemd[1]: Started sshd@4-5.75.230.207:22-139.178.89.65:51526.service - OpenSSH per-connection server daemon (139.178.89.65:51526). Dec 13 08:58:03.770788 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 51526 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:03.772503 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:03.777598 systemd-logind[1452]: New session 4 of user core. Dec 13 08:58:03.788743 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 08:58:04.455189 sshd[1764]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:04.460134 systemd[1]: sshd@4-5.75.230.207:22-139.178.89.65:51526.service: Deactivated successfully. Dec 13 08:58:04.461881 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 08:58:04.463460 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Dec 13 08:58:04.464822 systemd-logind[1452]: Removed session 4. Dec 13 08:58:04.627747 systemd[1]: Started sshd@5-5.75.230.207:22-139.178.89.65:51540.service - OpenSSH per-connection server daemon (139.178.89.65:51540). Dec 13 08:58:05.617272 sshd[1771]: Accepted publickey for core from 139.178.89.65 port 51540 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:05.619073 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:05.623955 systemd-logind[1452]: New session 5 of user core. Dec 13 08:58:05.633678 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 08:58:06.148962 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 08:58:06.149228 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:58:06.171480 sudo[1774]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:06.335909 sshd[1771]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:06.344042 systemd[1]: sshd@5-5.75.230.207:22-139.178.89.65:51540.service: Deactivated successfully. Dec 13 08:58:06.346756 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 08:58:06.349751 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Dec 13 08:58:06.351830 systemd-logind[1452]: Removed session 5.
Dec 13 08:58:06.517954 systemd[1]: Started sshd@6-5.75.230.207:22-139.178.89.65:51552.service - OpenSSH per-connection server daemon (139.178.89.65:51552). Dec 13 08:58:07.500898 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 51552 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:07.503256 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:07.509811 systemd-logind[1452]: New session 6 of user core. Dec 13 08:58:07.519703 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 08:58:07.661860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 08:58:07.667770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:07.784734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:58:07.789890 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:58:07.838916 kubelet[1790]: E1213 08:58:07.838803 1790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:58:07.842086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:58:07.842368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:58:08.022885 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 08:58:08.023179 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:58:08.028107 sudo[1799]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:08.034839 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 08:58:08.035141 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:58:08.051985 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 08:58:08.055049 auditctl[1802]: No rules Dec 13 08:58:08.056093 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 08:58:08.056376 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 08:58:08.059878 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 08:58:08.091560 augenrules[1820]: No rules Dec 13 08:58:08.092844 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:58:08.096692 sudo[1798]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:08.257645 sshd[1779]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:08.261808 systemd[1]: sshd@6-5.75.230.207:22-139.178.89.65:51552.service: Deactivated successfully. Dec 13 08:58:08.263782 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 08:58:08.265435 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Dec 13 08:58:08.266836 systemd-logind[1452]: Removed session 6. Dec 13 08:58:08.437915 systemd[1]: Started sshd@7-5.75.230.207:22-139.178.89.65:48204.service - OpenSSH per-connection server daemon (139.178.89.65:48204). 
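The sudo sequence above clears the audit rule set: the rules files are removed, audit-rules is restarted, and both auditctl and augenrules then report "No rules". The live kernel rule set can be confirmed the same way, e.g.:

    auditctl -l    # lists the currently loaded audit rules ('No rules' when empty)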
Dec 13 08:58:09.418079 sshd[1828]: Accepted publickey for core from 139.178.89.65 port 48204 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 08:58:09.420516 sshd[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 08:58:09.426223 systemd-logind[1452]: New session 7 of user core.
Dec 13 08:58:09.434782 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 08:58:09.940252 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 08:58:09.940572 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 08:58:10.250804 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 08:58:10.251466 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 08:58:10.509950 dockerd[1846]: time="2024-12-13T08:58:10.509495204Z" level=info msg="Starting up"
Dec 13 08:58:10.613071 dockerd[1846]: time="2024-12-13T08:58:10.613014539Z" level=info msg="Loading containers: start."
Dec 13 08:58:10.743410 kernel: Initializing XFRM netlink socket
Dec 13 08:58:10.830489 systemd-networkd[1372]: docker0: Link UP
Dec 13 08:58:10.853964 dockerd[1846]: time="2024-12-13T08:58:10.853865115Z" level=info msg="Loading containers: done."
Dec 13 08:58:10.870712 dockerd[1846]: time="2024-12-13T08:58:10.870642682Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 08:58:10.870883 dockerd[1846]: time="2024-12-13T08:58:10.870770848Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 08:58:10.870909 dockerd[1846]: time="2024-12-13T08:58:10.870900894Z" level=info msg="Daemon has completed initialization"
Dec 13 08:58:10.915911 dockerd[1846]: time="2024-12-13T08:58:10.915086915Z" level=info msg="API listen on /run/docker.sock"
Dec 13 08:58:10.916190 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 08:58:12.028003 containerd[1473]: time="2024-12-13T08:58:12.027944284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 08:58:12.682279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720437348.mount: Deactivated successfully.
Dec 13 08:58:13.664972 containerd[1473]: time="2024-12-13T08:58:13.664879567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:13.667773 containerd[1473]: time="2024-12-13T08:58:13.667687685Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201342"
Dec 13 08:58:13.669538 containerd[1473]: time="2024-12-13T08:58:13.669486921Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:13.673072 containerd[1473]: time="2024-12-13T08:58:13.672970347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:13.675173 containerd[1473]: time="2024-12-13T08:58:13.675118558Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.647124272s"
Dec 13 08:58:13.675173 containerd[1473]: time="2024-12-13T08:58:13.675165360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\""
Dec 13 08:58:13.704175 containerd[1473]: time="2024-12-13T08:58:13.704130858Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 08:58:15.042817 containerd[1473]: time="2024-12-13T08:58:15.041724555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:15.044071 containerd[1473]: time="2024-12-13T08:58:15.044037767Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381317"
Dec 13 08:58:15.045090 containerd[1473]: time="2024-12-13T08:58:15.045013566Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:15.048519 containerd[1473]: time="2024-12-13T08:58:15.048479584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:15.050676 containerd[1473]: time="2024-12-13T08:58:15.050633069Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.346450689s"
Dec 13 08:58:15.051347 containerd[1473]: time="2024-12-13T08:58:15.051321017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Dec 13 08:58:15.079023 containerd[1473]: time="2024-12-13T08:58:15.078988678Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 08:58:16.009276 containerd[1473]: time="2024-12-13T08:58:16.009225485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:16.010413 containerd[1473]: time="2024-12-13T08:58:16.010367770Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765660"
Dec 13 08:58:16.011418 containerd[1473]: time="2024-12-13T08:58:16.011071677Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:16.015180 containerd[1473]: time="2024-12-13T08:58:16.015114633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:16.016615 containerd[1473]: time="2024-12-13T08:58:16.016373802Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 937.220038ms"
Dec 13 08:58:16.016615 containerd[1473]: time="2024-12-13T08:58:16.016427164Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\""
Dec 13 08:58:16.039550 containerd[1473]: time="2024-12-13T08:58:16.039494857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 08:58:17.004614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3959162793.mount: Deactivated successfully.
Dec 13 08:58:17.319695 containerd[1473]: time="2024-12-13T08:58:17.319532998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:17.321735 containerd[1473]: time="2024-12-13T08:58:17.321692519Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25274003"
Dec 13 08:58:17.323107 containerd[1473]: time="2024-12-13T08:58:17.323065651Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:17.325603 containerd[1473]: time="2024-12-13T08:58:17.325561865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:17.327156 containerd[1473]: time="2024-12-13T08:58:17.327115243Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.287577825s"
Dec 13 08:58:17.327290 containerd[1473]: time="2024-12-13T08:58:17.327273929Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Dec 13 08:58:17.353415 containerd[1473]: time="2024-12-13T08:58:17.353353911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 08:58:17.877024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 13 08:58:17.884290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:58:17.895029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818572464.mount: Deactivated successfully.
Dec 13 08:58:18.018407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:58:18.019593 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 08:58:18.097169 kubelet[2091]: E1213 08:58:18.097081 2091 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 08:58:18.100495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:58:18.100637 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:58:18.604797 containerd[1473]: time="2024-12-13T08:58:18.604730154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:18.607152 containerd[1473]: time="2024-12-13T08:58:18.606720067Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Dec 13 08:58:18.609441 containerd[1473]: time="2024-12-13T08:58:18.609057873Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:18.618485 containerd[1473]: time="2024-12-13T08:58:18.618378494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:18.620401 containerd[1473]: time="2024-12-13T08:58:18.619485175Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.265838332s"
Dec 13 08:58:18.620401 containerd[1473]: time="2024-12-13T08:58:18.619530337Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 08:58:18.648809 containerd[1473]: time="2024-12-13T08:58:18.648773328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 08:58:19.179047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028646571.mount: Deactivated successfully.
Dec 13 08:58:19.187410 containerd[1473]: time="2024-12-13T08:58:19.187336645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:19.188752 containerd[1473]: time="2024-12-13T08:58:19.188711814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Dec 13 08:58:19.190421 containerd[1473]: time="2024-12-13T08:58:19.188989504Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:19.191799 containerd[1473]: time="2024-12-13T08:58:19.191738802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:19.193236 containerd[1473]: time="2024-12-13T08:58:19.192600433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 543.625418ms"
Dec 13 08:58:19.193236 containerd[1473]: time="2024-12-13T08:58:19.192635154Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 08:58:19.214134 containerd[1473]: time="2024-12-13T08:58:19.214094960Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 08:58:19.828713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915570141.mount: Deactivated successfully.
Dec 13 08:58:21.207839 containerd[1473]: time="2024-12-13T08:58:21.206408809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:21.208378 containerd[1473]: time="2024-12-13T08:58:21.208344875Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866"
Dec 13 08:58:21.210472 containerd[1473]: time="2024-12-13T08:58:21.210439145Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:21.213836 containerd[1473]: time="2024-12-13T08:58:21.213796339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:21.215177 containerd[1473]: time="2024-12-13T08:58:21.215131944Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.000826737s"
Dec 13 08:58:21.215247 containerd[1473]: time="2024-12-13T08:58:21.215175945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Dec 13 08:58:25.673168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:58:25.684778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:58:25.724106 systemd[1]: Reloading requested from client PID 2256 ('systemctl') (unit session-7.scope)...
Dec 13 08:58:25.724128 systemd[1]: Reloading...
Dec 13 08:58:25.855823 zram_generator::config[2296]: No configuration found.
Dec 13 08:58:25.961170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:58:26.030864 systemd[1]: Reloading finished in 306 ms.
Dec 13 08:58:26.083866 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 08:58:26.083953 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 08:58:26.084215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:58:26.091936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:58:26.209665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:58:26.216779 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 08:58:26.263808 kubelet[2344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 08:58:26.264223 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 08:58:26.264269 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 08:58:26.264467 kubelet[2344]: I1213 08:58:26.264417 2344 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 08:58:26.888447 kubelet[2344]: I1213 08:58:26.888406 2344 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 08:58:26.888447 kubelet[2344]: I1213 08:58:26.888443 2344 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 08:58:26.888716 kubelet[2344]: I1213 08:58:26.888699 2344 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 08:58:26.909239 kubelet[2344]: I1213 08:58:26.909184 2344 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 08:58:26.910417 kubelet[2344]: E1213 08:58:26.910032 2344 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://5.75.230.207:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.920164 kubelet[2344]: I1213 08:58:26.920120 2344 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 08:58:26.921424 kubelet[2344]: I1213 08:58:26.921354 2344 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 08:58:26.921630 kubelet[2344]: I1213 08:58:26.921589 2344 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 08:58:26.921630 kubelet[2344]: I1213 08:58:26.921624 2344 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 08:58:26.921630 kubelet[2344]: I1213 08:58:26.921633 2344 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 08:58:26.922997 kubelet[2344]: I1213 08:58:26.922934 2344 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 08:58:26.926593 kubelet[2344]: I1213 08:58:26.926517 2344 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 08:58:26.926593 kubelet[2344]: I1213 08:58:26.926575 2344 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 08:58:26.929437 kubelet[2344]: I1213 08:58:26.928109 2344 kubelet.go:312] "Adding apiserver pod source"
Dec 13 08:58:26.929437 kubelet[2344]: I1213 08:58:26.928143 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 08:58:26.929437 kubelet[2344]: W1213 08:58:26.928098 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://5.75.230.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-e153687e15&limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.929437 kubelet[2344]: E1213 08:58:26.928189 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://5.75.230.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-e153687e15&limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.930855 kubelet[2344]: I1213 08:58:26.930822 2344 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 08:58:26.931371 kubelet[2344]: I1213 08:58:26.931341 2344 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 08:58:26.931503 kubelet[2344]: W1213 08:58:26.931481 2344 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 08:58:26.932520 kubelet[2344]: I1213 08:58:26.932488 2344 server.go:1256] "Started kubelet"
Dec 13 08:58:26.932669 kubelet[2344]: W1213 08:58:26.932625 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://5.75.230.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.932706 kubelet[2344]: E1213 08:58:26.932679 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://5.75.230.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.943005 kubelet[2344]: I1213 08:58:26.942955 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 08:58:26.945404 kubelet[2344]: E1213 08:58:26.945353 2344 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://5.75.230.207:6443/api/v1/namespaces/default/events\": dial tcp 5.75.230.207:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-e-e153687e15.1810b0da02b5a08d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-e-e153687e15,UID:ci-4081-2-1-e-e153687e15,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-e-e153687e15,},FirstTimestamp:2024-12-13 08:58:26.932457613 +0000 UTC m=+0.711285858,LastTimestamp:2024-12-13 08:58:26.932457613 +0000 UTC m=+0.711285858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-e-e153687e15,}"
Dec 13 08:58:26.949805 kubelet[2344]: I1213 08:58:26.949768 2344 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 08:58:26.951283 kubelet[2344]: I1213 08:58:26.951249 2344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 08:58:26.951523 kubelet[2344]: I1213 08:58:26.951500 2344 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 08:58:26.951919 kubelet[2344]: I1213 08:58:26.951894 2344 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 08:58:26.952814 kubelet[2344]: I1213 08:58:26.952777 2344 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 08:58:26.954956 kubelet[2344]: E1213 08:58:26.954933 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.230.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-e153687e15?timeout=10s\": dial tcp 5.75.230.207:6443: connect: connection refused" interval="200ms"
Dec 13 08:58:26.956303 kubelet[2344]: I1213 08:58:26.956265 2344 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 08:58:26.956377 kubelet[2344]: I1213 08:58:26.956355 2344 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 08:58:26.957676 kubelet[2344]: E1213 08:58:26.957655 2344 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 08:58:26.958449 kubelet[2344]: I1213 08:58:26.957949 2344 factory.go:221] Registration of the containerd container factory successfully
Dec 13 08:58:26.958449 kubelet[2344]: I1213 08:58:26.957965 2344 factory.go:221] Registration of the systemd container factory successfully
Dec 13 08:58:26.958449 kubelet[2344]: I1213 08:58:26.958038 2344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 08:58:26.974185 kubelet[2344]: I1213 08:58:26.974134 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 08:58:26.975671 kubelet[2344]: I1213 08:58:26.975639 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 08:58:26.975671 kubelet[2344]: I1213 08:58:26.975668 2344 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 08:58:26.975809 kubelet[2344]: I1213 08:58:26.975693 2344 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 08:58:26.975809 kubelet[2344]: E1213 08:58:26.975748 2344 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 08:58:26.983491 kubelet[2344]: W1213 08:58:26.983141 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://5.75.230.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.983491 kubelet[2344]: E1213 08:58:26.983232 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://5.75.230.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.984848 kubelet[2344]: W1213 08:58:26.984536 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://5.75.230.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.984943 kubelet[2344]: E1213 08:58:26.984855 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://5.75.230.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:26.990159 kubelet[2344]: I1213 08:58:26.990127 2344 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 08:58:26.990159 kubelet[2344]: I1213 08:58:26.990153 2344 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 08:58:26.990283 kubelet[2344]: I1213 08:58:26.990173 2344 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 08:58:26.994452 kubelet[2344]: I1213 08:58:26.994334 2344 policy_none.go:49] "None policy: Start"
Dec 13 08:58:26.995407 kubelet[2344]: I1213 08:58:26.995299 2344 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 08:58:26.995407 kubelet[2344]: I1213 08:58:26.995408 2344 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 08:58:27.002768 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 08:58:27.014035 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 08:58:27.019291 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 08:58:27.029833 kubelet[2344]: I1213 08:58:27.029800 2344 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 08:58:27.030779 kubelet[2344]: I1213 08:58:27.030748 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 08:58:27.035053 kubelet[2344]: E1213 08:58:27.035010 2344 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-e-e153687e15\" not found"
Dec 13 08:58:27.055883 kubelet[2344]: I1213 08:58:27.055848 2344 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.056505 kubelet[2344]: E1213 08:58:27.056480 2344 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://5.75.230.207:6443/api/v1/nodes\": dial tcp 5.75.230.207:6443: connect: connection refused" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.076939 kubelet[2344]: I1213 08:58:27.076766 2344 topology_manager.go:215] "Topology Admit Handler" podUID="983ff4a1f4bfb218c897a0546e7200c7" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.079460 kubelet[2344]: I1213 08:58:27.079107 2344 topology_manager.go:215] "Topology Admit Handler" podUID="d821ee79222541b6d299a159aeff515b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.082694 kubelet[2344]: I1213 08:58:27.082652 2344 topology_manager.go:215] "Topology Admit Handler" podUID="f6d834555c43627646756760f927db4a" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.093318 systemd[1]: Created slice kubepods-burstable-pod983ff4a1f4bfb218c897a0546e7200c7.slice - libcontainer container kubepods-burstable-pod983ff4a1f4bfb218c897a0546e7200c7.slice.
Dec 13 08:58:27.111605 systemd[1]: Created slice kubepods-burstable-podd821ee79222541b6d299a159aeff515b.slice - libcontainer container kubepods-burstable-podd821ee79222541b6d299a159aeff515b.slice.
Dec 13 08:58:27.125557 systemd[1]: Created slice kubepods-burstable-podf6d834555c43627646756760f927db4a.slice - libcontainer container kubepods-burstable-podf6d834555c43627646756760f927db4a.slice.
Dec 13 08:58:27.157793 kubelet[2344]: E1213 08:58:27.157661 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.230.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-e153687e15?timeout=10s\": dial tcp 5.75.230.207:6443: connect: connection refused" interval="400ms"
Dec 13 08:58:27.258246 kubelet[2344]: I1213 08:58:27.258096 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258458 kubelet[2344]: I1213 08:58:27.258305 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258458 kubelet[2344]: I1213 08:58:27.258374 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258606 kubelet[2344]: I1213 08:58:27.258462 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258606 kubelet[2344]: I1213 08:58:27.258512 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d821ee79222541b6d299a159aeff515b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-e-e153687e15\" (UID: \"d821ee79222541b6d299a159aeff515b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258606 kubelet[2344]: I1213 08:58:27.258557 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258606 kubelet[2344]: I1213 08:58:27.258605 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6d834555c43627646756760f927db4a-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-e-e153687e15\" (UID: \"f6d834555c43627646756760f927db4a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258824 kubelet[2344]: I1213 08:58:27.258647 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6d834555c43627646756760f927db4a-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-e-e153687e15\" (UID: \"f6d834555c43627646756760f927db4a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.258824 kubelet[2344]: I1213 08:58:27.258695 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6d834555c43627646756760f927db4a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-e-e153687e15\" (UID: \"f6d834555c43627646756760f927db4a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.260720 kubelet[2344]: I1213 08:58:27.260310 2344 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.260720 kubelet[2344]: E1213 08:58:27.260692 2344 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://5.75.230.207:6443/api/v1/nodes\": dial tcp 5.75.230.207:6443: connect: connection refused" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.409632 containerd[1473]: time="2024-12-13T08:58:27.409343779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-e-e153687e15,Uid:983ff4a1f4bfb218c897a0546e7200c7,Namespace:kube-system,Attempt:0,}"
Dec 13 08:58:27.423192 containerd[1473]: time="2024-12-13T08:58:27.422942931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-e-e153687e15,Uid:d821ee79222541b6d299a159aeff515b,Namespace:kube-system,Attempt:0,}"
Dec 13 08:58:27.429566 containerd[1473]: time="2024-12-13T08:58:27.429252873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-e-e153687e15,Uid:f6d834555c43627646756760f927db4a,Namespace:kube-system,Attempt:0,}"
Dec 13 08:58:27.558794 kubelet[2344]: E1213 08:58:27.558753 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.230.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-e153687e15?timeout=10s\": dial tcp 5.75.230.207:6443: connect: connection refused" interval="800ms"
Dec 13 08:58:27.664881 kubelet[2344]: I1213 08:58:27.664536 2344 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.665057 kubelet[2344]: E1213 08:58:27.664950 2344 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://5.75.230.207:6443/api/v1/nodes\": dial tcp 5.75.230.207:6443: connect: connection refused" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:27.951683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712030438.mount: Deactivated successfully.
Dec 13 08:58:27.957892 containerd[1473]: time="2024-12-13T08:58:27.957817398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 08:58:27.959430 containerd[1473]: time="2024-12-13T08:58:27.959373723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Dec 13 08:58:27.962522 containerd[1473]: time="2024-12-13T08:58:27.962424131Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 08:58:27.964770 containerd[1473]: time="2024-12-13T08:58:27.964234343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 08:58:27.964770 containerd[1473]: time="2024-12-13T08:58:27.964572473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 08:58:27.965958 containerd[1473]: time="2024-12-13T08:58:27.965917152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 08:58:27.967226 containerd[1473]: time="2024-12-13T08:58:27.967190628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 08:58:27.969782 containerd[1473]: time="2024-12-13T08:58:27.969506095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 08:58:27.971952 containerd[1473]: time="2024-12-13T08:58:27.971910564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.018101ms"
Dec 13 08:58:27.973339 containerd[1473]: time="2024-12-13T08:58:27.972244814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.20088ms"
Dec 13 08:58:27.979043 containerd[1473]: time="2024-12-13T08:58:27.978702680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.346565ms"
Dec 13 08:58:27.984769 kubelet[2344]: W1213 08:58:27.984711 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://5.75.230.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:27.984769 kubelet[2344]: E1213 08:58:27.984770 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://5.75.230.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:28.094170 containerd[1473]: time="2024-12-13T08:58:28.094008697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:58:28.094479 containerd[1473]: time="2024-12-13T08:58:28.094368387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:58:28.094579 containerd[1473]: time="2024-12-13T08:58:28.094464470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:28.094810 containerd[1473]: time="2024-12-13T08:58:28.094744158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:28.096809 containerd[1473]: time="2024-12-13T08:58:28.096541768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:58:28.096809 containerd[1473]: time="2024-12-13T08:58:28.096597530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:58:28.096809 containerd[1473]: time="2024-12-13T08:58:28.096614010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:28.096809 containerd[1473]: time="2024-12-13T08:58:28.096693612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:28.099939 containerd[1473]: time="2024-12-13T08:58:28.098747590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:58:28.099939 containerd[1473]: time="2024-12-13T08:58:28.098799672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:58:28.099939 containerd[1473]: time="2024-12-13T08:58:28.098810992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:28.099939 containerd[1473]: time="2024-12-13T08:58:28.098883234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:28.125645 systemd[1]: Started cri-containerd-9c2b10a677f8517ad96274fbff98b5ba1d130aad9a61127d5bceecd8609a93c2.scope - libcontainer container 9c2b10a677f8517ad96274fbff98b5ba1d130aad9a61127d5bceecd8609a93c2.
Dec 13 08:58:28.130926 systemd[1]: Started cri-containerd-c4d63d09e5f391e933a517446e5e5f7b1500fb8ed3a3cc29e41e10b0ace6e5a5.scope - libcontainer container c4d63d09e5f391e933a517446e5e5f7b1500fb8ed3a3cc29e41e10b0ace6e5a5.
Dec 13 08:58:28.144527 systemd[1]: Started cri-containerd-8de6ce95677fd4620b809e725e9318c7c358a7d46259d3249b43ce547e219794.scope - libcontainer container 8de6ce95677fd4620b809e725e9318c7c358a7d46259d3249b43ce547e219794.
Dec 13 08:58:28.148807 kubelet[2344]: W1213 08:58:28.148647 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://5.75.230.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-e153687e15&limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:28.148948 kubelet[2344]: E1213 08:58:28.148813 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://5.75.230.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-e-e153687e15&limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:28.190613 containerd[1473]: time="2024-12-13T08:58:28.190479048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-e-e153687e15,Uid:983ff4a1f4bfb218c897a0546e7200c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8de6ce95677fd4620b809e725e9318c7c358a7d46259d3249b43ce547e219794\""
Dec 13 08:58:28.198240 containerd[1473]: time="2024-12-13T08:58:28.198129663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-e-e153687e15,Uid:f6d834555c43627646756760f927db4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c2b10a677f8517ad96274fbff98b5ba1d130aad9a61127d5bceecd8609a93c2\""
Dec 13 08:58:28.199777 containerd[1473]: time="2024-12-13T08:58:28.199584664Z" level=info msg="CreateContainer within sandbox \"8de6ce95677fd4620b809e725e9318c7c358a7d46259d3249b43ce547e219794\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 08:58:28.204904 containerd[1473]: time="2024-12-13T08:58:28.204764490Z" level=info msg="CreateContainer within sandbox \"9c2b10a677f8517ad96274fbff98b5ba1d130aad9a61127d5bceecd8609a93c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 08:58:28.206979 kubelet[2344]: W1213 08:58:28.206922 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://5.75.230.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:28.206979 kubelet[2344]: E1213 08:58:28.206978 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://5.75.230.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:28.215315 containerd[1473]: time="2024-12-13T08:58:28.215244544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-e-e153687e15,Uid:d821ee79222541b6d299a159aeff515b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4d63d09e5f391e933a517446e5e5f7b1500fb8ed3a3cc29e41e10b0ace6e5a5\""
Dec 13 08:58:28.219090 containerd[1473]: time="2024-12-13T08:58:28.219053891Z" level=info msg="CreateContainer within sandbox \"c4d63d09e5f391e933a517446e5e5f7b1500fb8ed3a3cc29e41e10b0ace6e5a5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 08:58:28.227634 containerd[1473]: time="2024-12-13T08:58:28.227584931Z" level=info msg="CreateContainer within sandbox \"8de6ce95677fd4620b809e725e9318c7c358a7d46259d3249b43ce547e219794\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2483c32c6656cb26fe8c99be969cc1cedee5d4dc089521ef62ba48705944005d\""
Dec 13 08:58:28.228796 containerd[1473]: time="2024-12-13T08:58:28.228767964Z" level=info msg="StartContainer for \"2483c32c6656cb26fe8c99be969cc1cedee5d4dc089521ef62ba48705944005d\""
Dec 13 08:58:28.243030 containerd[1473]: time="2024-12-13T08:58:28.242768118Z" level=info msg="CreateContainer within sandbox \"9c2b10a677f8517ad96274fbff98b5ba1d130aad9a61127d5bceecd8609a93c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4b70cf8fb4b3947e0c90e4fc4889936da9449fdddca951abcc0e2a5ee073c22\""
Dec 13 08:58:28.244037 containerd[1473]: time="2024-12-13T08:58:28.244008353Z" level=info msg="StartContainer for \"a4b70cf8fb4b3947e0c90e4fc4889936da9449fdddca951abcc0e2a5ee073c22\""
Dec 13 08:58:28.248869 containerd[1473]: time="2024-12-13T08:58:28.248819368Z" level=info msg="CreateContainer within sandbox \"c4d63d09e5f391e933a517446e5e5f7b1500fb8ed3a3cc29e41e10b0ace6e5a5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b816931bb84d9158be5bd7a8bfea1e6f7bf8ec30b399dbd841f86912dcaf83c1\""
Dec 13 08:58:28.249913 containerd[1473]: time="2024-12-13T08:58:28.249779795Z" level=info msg="StartContainer for \"b816931bb84d9158be5bd7a8bfea1e6f7bf8ec30b399dbd841f86912dcaf83c1\""
Dec 13 08:58:28.270594 systemd[1]: Started cri-containerd-2483c32c6656cb26fe8c99be969cc1cedee5d4dc089521ef62ba48705944005d.scope - libcontainer container 2483c32c6656cb26fe8c99be969cc1cedee5d4dc089521ef62ba48705944005d.
Dec 13 08:58:28.285234 systemd[1]: Started cri-containerd-a4b70cf8fb4b3947e0c90e4fc4889936da9449fdddca951abcc0e2a5ee073c22.scope - libcontainer container a4b70cf8fb4b3947e0c90e4fc4889936da9449fdddca951abcc0e2a5ee073c22.
Dec 13 08:58:28.305808 kubelet[2344]: W1213 08:58:28.305537 2344 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://5.75.230.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:28.305808 kubelet[2344]: E1213 08:58:28.305645 2344 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://5.75.230.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 5.75.230.207:6443: connect: connection refused
Dec 13 08:58:28.308071 systemd[1]: Started cri-containerd-b816931bb84d9158be5bd7a8bfea1e6f7bf8ec30b399dbd841f86912dcaf83c1.scope - libcontainer container b816931bb84d9158be5bd7a8bfea1e6f7bf8ec30b399dbd841f86912dcaf83c1.
Dec 13 08:58:28.348749 containerd[1473]: time="2024-12-13T08:58:28.348601612Z" level=info msg="StartContainer for \"a4b70cf8fb4b3947e0c90e4fc4889936da9449fdddca951abcc0e2a5ee073c22\" returns successfully"
Dec 13 08:58:28.358877 containerd[1473]: time="2024-12-13T08:58:28.358831780Z" level=info msg="StartContainer for \"2483c32c6656cb26fe8c99be969cc1cedee5d4dc089521ef62ba48705944005d\" returns successfully"
Dec 13 08:58:28.360043 kubelet[2344]: E1213 08:58:28.359787 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://5.75.230.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-e-e153687e15?timeout=10s\": dial tcp 5.75.230.207:6443: connect: connection refused" interval="1.6s"
Dec 13 08:58:28.389878 containerd[1473]: time="2024-12-13T08:58:28.389766849Z" level=info msg="StartContainer for \"b816931bb84d9158be5bd7a8bfea1e6f7bf8ec30b399dbd841f86912dcaf83c1\" returns successfully"
Dec 13 08:58:28.467664 kubelet[2344]: I1213 08:58:28.467552 2344 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:28.469637 kubelet[2344]: E1213 08:58:28.469583 2344 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://5.75.230.207:6443/api/v1/nodes\": dial tcp 5.75.230.207:6443: connect: connection refused" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:30.072910 kubelet[2344]: I1213 08:58:30.072873 2344 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:31.131835 kubelet[2344]: E1213 08:58:31.131794 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-e-e153687e15\" not found" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:31.145750 kubelet[2344]: I1213 08:58:31.145603 2344 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-e-e153687e15"
Dec 13 08:58:31.173351 kubelet[2344]: E1213 08:58:31.173131 2344 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-e-e153687e15.1810b0da02b5a08d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-e-e153687e15,UID:ci-4081-2-1-e-e153687e15,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-e-e153687e15,},FirstTimestamp:2024-12-13 08:58:26.932457613 +0000 UTC m=+0.711285858,LastTimestamp:2024-12-13 08:58:26.932457613 +0000 UTC m=+0.711285858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-e-e153687e15,}"
Dec 13 08:58:31.246894 kubelet[2344]: E1213 08:58:31.246849 2344 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-e-e153687e15.1810b0da0435c816 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-e-e153687e15,UID:ci-4081-2-1-e-e153687e15,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-e-e153687e15,},FirstTimestamp:2024-12-13 08:58:26.957633558 +0000 UTC m=+0.736461803,LastTimestamp:2024-12-13 08:58:26.957633558 +0000 UTC m=+0.736461803,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-e-e153687e15,}"
Dec 13 08:58:31.483270 kubelet[2344]: E1213 08:58:31.483127 2344 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15"
Dec 13 08:58:31.938130 kubelet[2344]: I1213 08:58:31.937487 2344 apiserver.go:52] "Watching apiserver"
Dec 13 08:58:31.956769 kubelet[2344]: I1213 08:58:31.956694 2344 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 08:58:33.910434 systemd[1]: Reloading requested from client PID 2618 ('systemctl') (unit session-7.scope)...
Dec 13 08:58:33.910811 systemd[1]: Reloading...
Dec 13 08:58:34.032455 zram_generator::config[2658]: No configuration found.
Dec 13 08:58:34.133539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:58:34.219340 systemd[1]: Reloading finished in 308 ms.
Dec 13 08:58:34.268255 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:58:34.268777 kubelet[2344]: I1213 08:58:34.268100 2344 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 08:58:34.281489 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 08:58:34.281861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:58:34.281946 systemd[1]: kubelet.service: Consumed 1.178s CPU time, 111.5M memory peak, 0B memory swap peak.
Dec 13 08:58:34.299789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 08:58:34.426901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:58:34.438788 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 08:58:34.503280 kubelet[2703]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 08:58:34.503280 kubelet[2703]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 08:58:34.503280 kubelet[2703]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 08:58:34.503280 kubelet[2703]: I1213 08:58:34.503240 2703 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:58:34.508416 kubelet[2703]: I1213 08:58:34.508260 2703 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 08:58:34.508416 kubelet[2703]: I1213 08:58:34.508295 2703 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:58:34.508717 kubelet[2703]: I1213 08:58:34.508698 2703 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 08:58:34.514564 kubelet[2703]: I1213 08:58:34.514508 2703 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 08:58:34.520565 kubelet[2703]: I1213 08:58:34.518366 2703 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:58:34.536629 kubelet[2703]: I1213 08:58:34.536583 2703 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 08:58:34.536857 kubelet[2703]: I1213 08:58:34.536830 2703 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:58:34.537033 kubelet[2703]: I1213 08:58:34.537017 2703 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:58:34.537121 kubelet[2703]: I1213 08:58:34.537044 2703 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:58:34.537121 kubelet[2703]: I1213 08:58:34.537055 2703 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:58:34.537121 kubelet[2703]: I1213 08:58:34.537086 2703 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:58:34.537442 kubelet[2703]: I1213 08:58:34.537426 2703 kubelet.go:396] "Attempting to sync node with API server" Dec 13 08:58:34.537575 kubelet[2703]: I1213 08:58:34.537452 2703 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:58:34.543003 kubelet[2703]: I1213 08:58:34.542963 2703 kubelet.go:312] "Adding apiserver pod source" Dec 13 
08:58:34.543003 kubelet[2703]: I1213 08:58:34.543011 2703 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:58:34.545416 kubelet[2703]: I1213 08:58:34.544809 2703 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:58:34.545416 kubelet[2703]: I1213 08:58:34.545043 2703 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:58:34.546092 kubelet[2703]: I1213 08:58:34.546074 2703 server.go:1256] "Started kubelet" Dec 13 08:58:34.551762 kubelet[2703]: I1213 08:58:34.549528 2703 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:58:34.565977 kubelet[2703]: I1213 08:58:34.565930 2703 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:58:34.567127 kubelet[2703]: I1213 08:58:34.567095 2703 server.go:461] "Adding debug handlers to kubelet server" Dec 13 08:58:34.577433 kubelet[2703]: I1213 08:58:34.574951 2703 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:58:34.577433 kubelet[2703]: I1213 08:58:34.575156 2703 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:58:34.581427 kubelet[2703]: I1213 08:58:34.579648 2703 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:58:34.581427 kubelet[2703]: I1213 08:58:34.580243 2703 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 08:58:34.581427 kubelet[2703]: I1213 08:58:34.581253 2703 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 08:58:34.594151 kubelet[2703]: I1213 08:58:34.594119 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:58:34.596952 kubelet[2703]: I1213 08:58:34.596908 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 08:58:34.597478 kubelet[2703]: I1213 08:58:34.597124 2703 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:58:34.597478 kubelet[2703]: I1213 08:58:34.597152 2703 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 08:58:34.597478 kubelet[2703]: E1213 08:58:34.597205 2703 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:58:34.601957 kubelet[2703]: I1213 08:58:34.601909 2703 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:58:34.602952 kubelet[2703]: I1213 08:58:34.602549 2703 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:58:34.614247 kubelet[2703]: I1213 08:58:34.612183 2703 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:58:34.625039 kubelet[2703]: E1213 08:58:34.625006 2703 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:58:34.675732 kubelet[2703]: I1213 08:58:34.675646 2703 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:58:34.675901 kubelet[2703]: I1213 08:58:34.675891 2703 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:58:34.676000 kubelet[2703]: I1213 08:58:34.675990 2703 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:58:34.676275 kubelet[2703]: I1213 08:58:34.676255 2703 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 08:58:34.676368 kubelet[2703]: I1213 08:58:34.676357 2703 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 08:58:34.676450 kubelet[2703]: I1213 08:58:34.676441 2703 policy_none.go:49] "None policy: Start" Dec 13 08:58:34.677402 kubelet[2703]: I1213 08:58:34.677370 2703 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:58:34.677532 kubelet[2703]: I1213 08:58:34.677522 2703 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:58:34.677914 kubelet[2703]: I1213 08:58:34.677899 2703 state_mem.go:75] "Updated machine memory state" Dec 13 08:58:34.684816 kubelet[2703]: I1213 08:58:34.683853 2703 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.694562 kubelet[2703]: I1213 08:58:34.693672 2703 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:58:34.694562 kubelet[2703]: I1213 08:58:34.693979 2703 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:58:34.697479 kubelet[2703]: I1213 08:58:34.697308 2703 topology_manager.go:215] "Topology Admit Handler" podUID="f6d834555c43627646756760f927db4a" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.698680 kubelet[2703]: I1213 08:58:34.698650 2703 topology_manager.go:215] "Topology Admit Handler" podUID="983ff4a1f4bfb218c897a0546e7200c7" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.700008 kubelet[2703]: I1213 08:58:34.698972 2703 topology_manager.go:215] "Topology Admit Handler" podUID="d821ee79222541b6d299a159aeff515b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.720014 kubelet[2703]: I1213 08:58:34.719254 2703 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.723807 kubelet[2703]: I1213 08:58:34.720312 2703 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.782906 kubelet[2703]: I1213 08:58:34.782759 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d821ee79222541b6d299a159aeff515b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-e-e153687e15\" (UID: \"d821ee79222541b6d299a159aeff515b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.782906 kubelet[2703]: I1213 08:58:34.782856 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6d834555c43627646756760f927db4a-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-e-e153687e15\" (UID: \"f6d834555c43627646756760f927db4a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.782906 kubelet[2703]: I1213 08:58:34.782881 2703 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6d834555c43627646756760f927db4a-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-e-e153687e15\" (UID: \"f6d834555c43627646756760f927db4a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.783109 kubelet[2703]: I1213 08:58:34.782930 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.783109 kubelet[2703]: I1213 08:58:34.782959 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.783109 kubelet[2703]: I1213 08:58:34.783001 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.783109 kubelet[2703]: I1213 08:58:34.783024 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.783109 kubelet[2703]: I1213 08:58:34.783081 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6d834555c43627646756760f927db4a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-e-e153687e15\" (UID: \"f6d834555c43627646756760f927db4a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15" Dec 13 08:58:34.783294 kubelet[2703]: I1213 08:58:34.783117 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/983ff4a1f4bfb218c897a0546e7200c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-e-e153687e15\" (UID: \"983ff4a1f4bfb218c897a0546e7200c7\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15" Dec 13 08:58:35.545787 kubelet[2703]: I1213 08:58:35.545357 2703 apiserver.go:52] "Watching apiserver" Dec 13 08:58:35.581174 kubelet[2703]: I1213 08:58:35.581075 2703 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:58:35.699130 kubelet[2703]: E1213 08:58:35.698594 2703 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-e-e153687e15\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15" Dec 13 08:58:35.778795 kubelet[2703]: I1213 08:58:35.778758 2703 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-e-e153687e15" podStartSLOduration=1.778709948 podStartE2EDuration="1.778709948s" podCreationTimestamp="2024-12-13 08:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:35.74650355 +0000 UTC m=+1.301306889" watchObservedRunningTime="2024-12-13 08:58:35.778709948 +0000 UTC m=+1.333513367" Dec 13 08:58:35.796935 kubelet[2703]: I1213 08:58:35.796764 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-e-e153687e15" podStartSLOduration=1.796721932 podStartE2EDuration="1.796721932s" podCreationTimestamp="2024-12-13 08:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:35.779357604 +0000 UTC m=+1.334160983" watchObservedRunningTime="2024-12-13 08:58:35.796721932 +0000 UTC m=+1.351525351" Dec 13 08:58:35.816848 kubelet[2703]: I1213 08:58:35.816697 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-e-e153687e15" podStartSLOduration=1.816644242 podStartE2EDuration="1.816644242s" podCreationTimestamp="2024-12-13 08:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:35.798007243 +0000 UTC m=+1.352810622" watchObservedRunningTime="2024-12-13 08:58:35.816644242 +0000 UTC m=+1.371447621" Dec 13 08:58:39.729150 sudo[1831]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:39.890130 sshd[1828]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:39.896880 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Dec 13 08:58:39.897363 systemd[1]: sshd@7-5.75.230.207:22-139.178.89.65:48204.service: Deactivated successfully. Dec 13 08:58:39.899657 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 08:58:39.899849 systemd[1]: session-7.scope: Consumed 6.353s CPU time, 186.2M memory peak, 0B memory swap peak. Dec 13 08:58:39.901611 systemd-logind[1452]: Removed session 7. Dec 13 08:58:46.898455 kubelet[2703]: I1213 08:58:46.897625 2703 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 08:58:46.901347 containerd[1473]: time="2024-12-13T08:58:46.899658517Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 08:58:46.901725 kubelet[2703]: I1213 08:58:46.900655 2703 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 08:58:47.831360 kubelet[2703]: I1213 08:58:47.829982 2703 topology_manager.go:215] "Topology Admit Handler" podUID="45ebd0b4-52b5-41e3-b518-acd06b59eef7" podNamespace="kube-system" podName="kube-proxy-mqvtm" Dec 13 08:58:47.845361 systemd[1]: Created slice kubepods-besteffort-pod45ebd0b4_52b5_41e3_b518_acd06b59eef7.slice - libcontainer container kubepods-besteffort-pod45ebd0b4_52b5_41e3_b518_acd06b59eef7.slice. 
Dec 13 08:58:47.873774 kubelet[2703]: I1213 08:58:47.873051 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45ebd0b4-52b5-41e3-b518-acd06b59eef7-xtables-lock\") pod \"kube-proxy-mqvtm\" (UID: \"45ebd0b4-52b5-41e3-b518-acd06b59eef7\") " pod="kube-system/kube-proxy-mqvtm" Dec 13 08:58:47.873774 kubelet[2703]: I1213 08:58:47.873098 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45ebd0b4-52b5-41e3-b518-acd06b59eef7-kube-proxy\") pod \"kube-proxy-mqvtm\" (UID: \"45ebd0b4-52b5-41e3-b518-acd06b59eef7\") " pod="kube-system/kube-proxy-mqvtm" Dec 13 08:58:47.873774 kubelet[2703]: I1213 08:58:47.873119 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45ebd0b4-52b5-41e3-b518-acd06b59eef7-lib-modules\") pod \"kube-proxy-mqvtm\" (UID: \"45ebd0b4-52b5-41e3-b518-acd06b59eef7\") " pod="kube-system/kube-proxy-mqvtm" Dec 13 08:58:47.873774 kubelet[2703]: I1213 08:58:47.873178 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v5cj\" (UniqueName: \"kubernetes.io/projected/45ebd0b4-52b5-41e3-b518-acd06b59eef7-kube-api-access-7v5cj\") pod \"kube-proxy-mqvtm\" (UID: \"45ebd0b4-52b5-41e3-b518-acd06b59eef7\") " pod="kube-system/kube-proxy-mqvtm" Dec 13 08:58:48.020810 kubelet[2703]: I1213 08:58:48.020758 2703 topology_manager.go:215] "Topology Admit Handler" podUID="91e5988a-ae9c-44df-9f8e-8c320470ffbd" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-ckjdb" Dec 13 08:58:48.032865 systemd[1]: Created slice kubepods-besteffort-pod91e5988a_ae9c_44df_9f8e_8c320470ffbd.slice - libcontainer container kubepods-besteffort-pod91e5988a_ae9c_44df_9f8e_8c320470ffbd.slice. Dec 13 08:58:48.074930 kubelet[2703]: I1213 08:58:48.074653 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91e5988a-ae9c-44df-9f8e-8c320470ffbd-var-lib-calico\") pod \"tigera-operator-c7ccbd65-ckjdb\" (UID: \"91e5988a-ae9c-44df-9f8e-8c320470ffbd\") " pod="tigera-operator/tigera-operator-c7ccbd65-ckjdb" Dec 13 08:58:48.074930 kubelet[2703]: I1213 08:58:48.074733 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t4q\" (UniqueName: \"kubernetes.io/projected/91e5988a-ae9c-44df-9f8e-8c320470ffbd-kube-api-access-z4t4q\") pod \"tigera-operator-c7ccbd65-ckjdb\" (UID: \"91e5988a-ae9c-44df-9f8e-8c320470ffbd\") " pod="tigera-operator/tigera-operator-c7ccbd65-ckjdb" Dec 13 08:58:48.155170 containerd[1473]: time="2024-12-13T08:58:48.155032651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqvtm,Uid:45ebd0b4-52b5-41e3-b518-acd06b59eef7,Namespace:kube-system,Attempt:0,}" Dec 13 08:58:48.185612 containerd[1473]: time="2024-12-13T08:58:48.185501900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:58:48.186240 containerd[1473]: time="2024-12-13T08:58:48.186034949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:58:48.186240 containerd[1473]: time="2024-12-13T08:58:48.186070430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:48.187493 containerd[1473]: time="2024-12-13T08:58:48.187335692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:48.217693 systemd[1]: Started cri-containerd-d0e61ac4b6832061746d2debfd8c8134edc8dc67fa3ac62d987b47334f3b5de7.scope - libcontainer container d0e61ac4b6832061746d2debfd8c8134edc8dc67fa3ac62d987b47334f3b5de7. Dec 13 08:58:48.243813 containerd[1473]: time="2024-12-13T08:58:48.243692870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mqvtm,Uid:45ebd0b4-52b5-41e3-b518-acd06b59eef7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0e61ac4b6832061746d2debfd8c8134edc8dc67fa3ac62d987b47334f3b5de7\"" Dec 13 08:58:48.248943 containerd[1473]: time="2024-12-13T08:58:48.248765398Z" level=info msg="CreateContainer within sandbox \"d0e61ac4b6832061746d2debfd8c8134edc8dc67fa3ac62d987b47334f3b5de7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 08:58:48.266644 containerd[1473]: time="2024-12-13T08:58:48.266586947Z" level=info msg="CreateContainer within sandbox \"d0e61ac4b6832061746d2debfd8c8134edc8dc67fa3ac62d987b47334f3b5de7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b936d3ab9b8a8b58cfd63507976072a4e38b7582c2a266b2ff630360d9824059\"" Dec 13 08:58:48.270483 containerd[1473]: time="2024-12-13T08:58:48.269101591Z" level=info msg="StartContainer for \"b936d3ab9b8a8b58cfd63507976072a4e38b7582c2a266b2ff630360d9824059\"" Dec 13 08:58:48.300678 systemd[1]: Started cri-containerd-b936d3ab9b8a8b58cfd63507976072a4e38b7582c2a266b2ff630360d9824059.scope - libcontainer container b936d3ab9b8a8b58cfd63507976072a4e38b7582c2a266b2ff630360d9824059. Dec 13 08:58:48.333238 containerd[1473]: time="2024-12-13T08:58:48.333139422Z" level=info msg="StartContainer for \"b936d3ab9b8a8b58cfd63507976072a4e38b7582c2a266b2ff630360d9824059\" returns successfully" Dec 13 08:58:48.338709 containerd[1473]: time="2024-12-13T08:58:48.338673158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ckjdb,Uid:91e5988a-ae9c-44df-9f8e-8c320470ffbd,Namespace:tigera-operator,Attempt:0,}" Dec 13 08:58:48.380164 containerd[1473]: time="2024-12-13T08:58:48.379830632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:58:48.380164 containerd[1473]: time="2024-12-13T08:58:48.379899273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:58:48.380164 containerd[1473]: time="2024-12-13T08:58:48.379915753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:48.380164 containerd[1473]: time="2024-12-13T08:58:48.380020675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:48.400824 systemd[1]: Started cri-containerd-1f1690cbe2a8f7f9cdc9286679583895939b481868fd30151e0bf1bcd88dbd82.scope - libcontainer container 1f1690cbe2a8f7f9cdc9286679583895939b481868fd30151e0bf1bcd88dbd82. 
Dec 13 08:58:48.452082 containerd[1473]: time="2024-12-13T08:58:48.452027725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-ckjdb,Uid:91e5988a-ae9c-44df-9f8e-8c320470ffbd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1f1690cbe2a8f7f9cdc9286679583895939b481868fd30151e0bf1bcd88dbd82\"" Dec 13 08:58:48.455802 containerd[1473]: time="2024-12-13T08:58:48.455759709Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 08:58:48.998885 systemd[1]: run-containerd-runc-k8s.io-d0e61ac4b6832061746d2debfd8c8134edc8dc67fa3ac62d987b47334f3b5de7-runc.S9zc66.mount: Deactivated successfully. Dec 13 08:58:50.507758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4150217834.mount: Deactivated successfully. Dec 13 08:58:50.831081 containerd[1473]: time="2024-12-13T08:58:50.830868203Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:50.833626 containerd[1473]: time="2024-12-13T08:58:50.833487966Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125996" Dec 13 08:58:50.834372 containerd[1473]: time="2024-12-13T08:58:50.834305380Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:50.840247 containerd[1473]: time="2024-12-13T08:58:50.839318303Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:50.840247 containerd[1473]: time="2024-12-13T08:58:50.840072075Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.384266805s" Dec 13 08:58:50.841792 containerd[1473]: time="2024-12-13T08:58:50.841728623Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 08:58:50.846082 containerd[1473]: time="2024-12-13T08:58:50.845926173Z" level=info msg="CreateContainer within sandbox \"1f1690cbe2a8f7f9cdc9286679583895939b481868fd30151e0bf1bcd88dbd82\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 08:58:50.871813 containerd[1473]: time="2024-12-13T08:58:50.871652480Z" level=info msg="CreateContainer within sandbox \"1f1690cbe2a8f7f9cdc9286679583895939b481868fd30151e0bf1bcd88dbd82\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c0e8415728ae889416817c0d77f0672a7563a76b061470f4f334cf927627b5b\"" Dec 13 08:58:50.872938 containerd[1473]: time="2024-12-13T08:58:50.872575335Z" level=info msg="StartContainer for \"3c0e8415728ae889416817c0d77f0672a7563a76b061470f4f334cf927627b5b\"" Dec 13 08:58:50.907143 systemd[1]: Started cri-containerd-3c0e8415728ae889416817c0d77f0672a7563a76b061470f4f334cf927627b5b.scope - libcontainer container 3c0e8415728ae889416817c0d77f0672a7563a76b061470f4f334cf927627b5b. 
Dec 13 08:58:50.937462 containerd[1473]: time="2024-12-13T08:58:50.937030085Z" level=info msg="StartContainer for \"3c0e8415728ae889416817c0d77f0672a7563a76b061470f4f334cf927627b5b\" returns successfully" Dec 13 08:58:51.722592 kubelet[2703]: I1213 08:58:51.722091 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mqvtm" podStartSLOduration=4.722045905 podStartE2EDuration="4.722045905s" podCreationTimestamp="2024-12-13 08:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:48.712768529 +0000 UTC m=+14.267571908" watchObservedRunningTime="2024-12-13 08:58:51.722045905 +0000 UTC m=+17.276849244" Dec 13 08:58:51.722592 kubelet[2703]: I1213 08:58:51.722179 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-ckjdb" podStartSLOduration=2.333433945 podStartE2EDuration="4.722162786s" podCreationTimestamp="2024-12-13 08:58:47 +0000 UTC" firstStartedPulling="2024-12-13 08:58:48.454254483 +0000 UTC m=+14.009057862" lastFinishedPulling="2024-12-13 08:58:50.842983324 +0000 UTC m=+16.397786703" observedRunningTime="2024-12-13 08:58:51.721281852 +0000 UTC m=+17.276085231" watchObservedRunningTime="2024-12-13 08:58:51.722162786 +0000 UTC m=+17.276966165" Dec 13 08:58:55.469605 kubelet[2703]: I1213 08:58:55.469560 2703 topology_manager.go:215] "Topology Admit Handler" podUID="64749c66-fcbc-45e4-9644-7c84f32f508f" podNamespace="calico-system" podName="calico-typha-56d8fd84c-z8d6r" Dec 13 08:58:55.480811 systemd[1]: Created slice kubepods-besteffort-pod64749c66_fcbc_45e4_9644_7c84f32f508f.slice - libcontainer container kubepods-besteffort-pod64749c66_fcbc_45e4_9644_7c84f32f508f.slice. Dec 13 08:58:55.524081 kubelet[2703]: I1213 08:58:55.524047 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxn7x\" (UniqueName: \"kubernetes.io/projected/64749c66-fcbc-45e4-9644-7c84f32f508f-kube-api-access-lxn7x\") pod \"calico-typha-56d8fd84c-z8d6r\" (UID: \"64749c66-fcbc-45e4-9644-7c84f32f508f\") " pod="calico-system/calico-typha-56d8fd84c-z8d6r" Dec 13 08:58:55.524434 kubelet[2703]: I1213 08:58:55.524269 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64749c66-fcbc-45e4-9644-7c84f32f508f-tigera-ca-bundle\") pod \"calico-typha-56d8fd84c-z8d6r\" (UID: \"64749c66-fcbc-45e4-9644-7c84f32f508f\") " pod="calico-system/calico-typha-56d8fd84c-z8d6r" Dec 13 08:58:55.524434 kubelet[2703]: I1213 08:58:55.524297 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/64749c66-fcbc-45e4-9644-7c84f32f508f-typha-certs\") pod \"calico-typha-56d8fd84c-z8d6r\" (UID: \"64749c66-fcbc-45e4-9644-7c84f32f508f\") " pod="calico-system/calico-typha-56d8fd84c-z8d6r" Dec 13 08:58:55.580295 kubelet[2703]: I1213 08:58:55.580253 2703 topology_manager.go:215] "Topology Admit Handler" podUID="5b67b005-87d1-4b04-833a-5887db7b7c84" podNamespace="calico-system" podName="calico-node-hdtfw" Dec 13 08:58:55.590830 systemd[1]: Created slice kubepods-besteffort-pod5b67b005_87d1_4b04_833a_5887db7b7c84.slice - libcontainer container kubepods-besteffort-pod5b67b005_87d1_4b04_833a_5887db7b7c84.slice. 
Dec 13 08:58:55.625659 kubelet[2703]: I1213 08:58:55.625626 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-lib-modules\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628556 kubelet[2703]: I1213 08:58:55.626354 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b67b005-87d1-4b04-833a-5887db7b7c84-tigera-ca-bundle\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628556 kubelet[2703]: I1213 08:58:55.626409 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-flexvol-driver-host\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628556 kubelet[2703]: I1213 08:58:55.626433 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6ck\" (UniqueName: \"kubernetes.io/projected/5b67b005-87d1-4b04-833a-5887db7b7c84-kube-api-access-4f6ck\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628556 kubelet[2703]: I1213 08:58:55.626510 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-var-run-calico\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628556 kubelet[2703]: I1213 08:58:55.626530 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-cni-log-dir\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628781 kubelet[2703]: I1213 08:58:55.626607 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-xtables-lock\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628781 kubelet[2703]: I1213 08:58:55.626641 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-policysync\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628781 kubelet[2703]: I1213 08:58:55.626667 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-cni-bin-dir\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628781 kubelet[2703]: I1213 08:58:55.626690 2703 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-cni-net-dir\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628781 kubelet[2703]: I1213 08:58:55.626711 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5b67b005-87d1-4b04-833a-5887db7b7c84-node-certs\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.628925 kubelet[2703]: I1213 08:58:55.626732 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5b67b005-87d1-4b04-833a-5887db7b7c84-var-lib-calico\") pod \"calico-node-hdtfw\" (UID: \"5b67b005-87d1-4b04-833a-5887db7b7c84\") " pod="calico-system/calico-node-hdtfw" Dec 13 08:58:55.714682 kubelet[2703]: I1213 08:58:55.714203 2703 topology_manager.go:215] "Topology Admit Handler" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" podNamespace="calico-system" podName="csi-node-driver-chvmc" Dec 13 08:58:55.715305 kubelet[2703]: E1213 08:58:55.715275 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" Dec 13 08:58:55.731109 kubelet[2703]: E1213 08:58:55.730996 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.731260 kubelet[2703]: W1213 08:58:55.731242 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.731346 kubelet[2703]: E1213 08:58:55.731335 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.732463 kubelet[2703]: E1213 08:58:55.732425 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.732642 kubelet[2703]: W1213 08:58:55.732621 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.732834 kubelet[2703]: E1213 08:58:55.732747 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:55.733073 kubelet[2703]: E1213 08:58:55.733040 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.733073 kubelet[2703]: W1213 08:58:55.733056 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.733328 kubelet[2703]: E1213 08:58:55.733235 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.733548 kubelet[2703]: E1213 08:58:55.733534 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.733725 kubelet[2703]: W1213 08:58:55.733655 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.733725 kubelet[2703]: E1213 08:58:55.733713 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.734565 kubelet[2703]: E1213 08:58:55.734060 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.734565 kubelet[2703]: W1213 08:58:55.734075 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.734565 kubelet[2703]: E1213 08:58:55.734116 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.734903 kubelet[2703]: E1213 08:58:55.734820 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.734903 kubelet[2703]: W1213 08:58:55.734837 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.734903 kubelet[2703]: E1213 08:58:55.734896 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.735794 kubelet[2703]: E1213 08:58:55.735683 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.735794 kubelet[2703]: W1213 08:58:55.735703 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.736036 kubelet[2703]: E1213 08:58:55.735960 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:55.736322 kubelet[2703]: E1213 08:58:55.736218 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.736773 kubelet[2703]: W1213 08:58:55.736413 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.736773 kubelet[2703]: E1213 08:58:55.736533 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.737432 kubelet[2703]: E1213 08:58:55.737025 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.737566 kubelet[2703]: W1213 08:58:55.737545 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.737684 kubelet[2703]: E1213 08:58:55.737658 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.737936 kubelet[2703]: E1213 08:58:55.737903 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.737936 kubelet[2703]: W1213 08:58:55.737917 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.738298 kubelet[2703]: E1213 08:58:55.738201 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.738553 kubelet[2703]: W1213 08:58:55.738364 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.738943 kubelet[2703]: E1213 08:58:55.738724 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.739199 kubelet[2703]: E1213 08:58:55.739084 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.740581 kubelet[2703]: E1213 08:58:55.740556 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.740909 kubelet[2703]: W1213 08:58:55.740689 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.740909 kubelet[2703]: E1213 08:58:55.740733 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:55.741478 kubelet[2703]: E1213 08:58:55.741453 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.741478 kubelet[2703]: W1213 08:58:55.741472 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.741577 kubelet[2703]: E1213 08:58:55.741490 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.752807 kubelet[2703]: E1213 08:58:55.752694 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.752807 kubelet[2703]: W1213 08:58:55.752715 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.752807 kubelet[2703]: E1213 08:58:55.752747 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.755360 kubelet[2703]: E1213 08:58:55.755153 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.755360 kubelet[2703]: W1213 08:58:55.755176 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.755360 kubelet[2703]: E1213 08:58:55.755292 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.784793 containerd[1473]: time="2024-12-13T08:58:55.784532345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56d8fd84c-z8d6r,Uid:64749c66-fcbc-45e4-9644-7c84f32f508f,Namespace:calico-system,Attempt:0,}" Dec 13 08:58:55.809672 kubelet[2703]: E1213 08:58:55.809638 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.809672 kubelet[2703]: W1213 08:58:55.809663 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.809952 kubelet[2703]: E1213 08:58:55.809687 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:55.810791 kubelet[2703]: E1213 08:58:55.810765 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.810791 kubelet[2703]: W1213 08:58:55.810784 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.811228 kubelet[2703]: E1213 08:58:55.810803 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.811645 kubelet[2703]: E1213 08:58:55.811623 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.811645 kubelet[2703]: W1213 08:58:55.811641 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.812127 kubelet[2703]: E1213 08:58:55.811657 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.812661 kubelet[2703]: E1213 08:58:55.812633 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.812661 kubelet[2703]: W1213 08:58:55.812648 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.812661 kubelet[2703]: E1213 08:58:55.812663 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.813964 kubelet[2703]: E1213 08:58:55.813921 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.813964 kubelet[2703]: W1213 08:58:55.813937 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.813964 kubelet[2703]: E1213 08:58:55.813954 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:55.815519 kubelet[2703]: E1213 08:58:55.815493 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:55.815519 kubelet[2703]: W1213 08:58:55.815515 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:55.815875 kubelet[2703]: E1213 08:58:55.815531 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:55.815981 kubelet[2703]: E1213 08:58:55.815968 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 08:58:55.816012 kubelet[2703]: W1213 08:58:55.815981 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 08:58:55.816012 kubelet[2703]: E1213 08:58:55.815995 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same driver-call.go:262 / driver-call.go:149 / plugins.go:730 probe-failure triplet recurs near-verbatim through 08:58:59; duplicate occurrences are omitted from the excerpt below.]
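The triplet above is the kubelet's exec-based FlexVolume probing failing at its first step: the driver executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the init call produces no stdout, driver-call.go cannot unmarshal the empty output as JSON, and plugins.go abandons and re-probes the plugin directory in a tight loop. Any executable that answers init with a JSON status object on stdout satisfies the call convention. The stub below is a minimal illustrative sketch of that handshake; only the path and the init argument come from the log, and the stub itself is not part of this system.

#!/usr/bin/env python3
# Illustrative FlexVolume driver stub (hypothetical; not present on this
# host). The kubelet invokes the driver as "<executable> <op> [args...]"
# and parses its stdout as JSON.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # A capabilities reply to "init" is exactly what driver-call.go
        # fails to find above ("unexpected end of JSON input").
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
    else:
        # Unimplemented operations are declared unsupported rather than
        # left silent, per the FlexVolume call convention.
        print(json.dumps({"status": "Not supported"}))
    return 0

if __name__ == "__main__":
    sys.exit(main())

In Calico deployments the nodeagent~uds driver is normally installed by the pod2daemon-flexvol container (its image is pulled later in this log), so the loop typically clears once that container runs; a leftover empty nodeagent~uds directory reproduces it indefinitely.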
Dec 13 08:58:55.831433 kubelet[2703]: I1213 08:58:55.829098 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8cd9a8db-6f3d-4382-8dc3-75aed978669b-socket-dir\") pod \"csi-node-driver-chvmc\" (UID: \"8cd9a8db-6f3d-4382-8dc3-75aed978669b\") " pod="calico-system/csi-node-driver-chvmc"
Dec 13 08:58:55.831989 kubelet[2703]: I1213 08:58:55.831953 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls2fx\" (UniqueName: \"kubernetes.io/projected/8cd9a8db-6f3d-4382-8dc3-75aed978669b-kube-api-access-ls2fx\") pod \"csi-node-driver-chvmc\" (UID: \"8cd9a8db-6f3d-4382-8dc3-75aed978669b\") " pod="calico-system/csi-node-driver-chvmc"
Dec 13 08:58:55.834212 kubelet[2703]: I1213 08:58:55.833828 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8cd9a8db-6f3d-4382-8dc3-75aed978669b-registration-dir\") pod \"csi-node-driver-chvmc\" (UID: \"8cd9a8db-6f3d-4382-8dc3-75aed978669b\") " pod="calico-system/csi-node-driver-chvmc"
Dec 13 08:58:55.835895 kubelet[2703]: I1213 08:58:55.835754 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8cd9a8db-6f3d-4382-8dc3-75aed978669b-kubelet-dir\") pod \"csi-node-driver-chvmc\" (UID: \"8cd9a8db-6f3d-4382-8dc3-75aed978669b\") " pod="calico-system/csi-node-driver-chvmc"
Dec 13 08:58:55.836816 kubelet[2703]: I1213 08:58:55.836413 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8cd9a8db-6f3d-4382-8dc3-75aed978669b-varrun\") pod \"csi-node-driver-chvmc\" (UID: \"8cd9a8db-6f3d-4382-8dc3-75aed978669b\") " pod="calico-system/csi-node-driver-chvmc"
Dec 13 08:58:55.849549 containerd[1473]: time="2024-12-13T08:58:55.849416274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:58:55.849549 containerd[1473]: time="2024-12-13T08:58:55.849497356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:58:55.849727 containerd[1473]: time="2024-12-13T08:58:55.849540116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:55.849727 containerd[1473]: time="2024-12-13T08:58:55.849636038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:55.873287 systemd[1]: Started cri-containerd-0e96315590b74ffac7ae908d5efba4889b3c837bed25a4f72c1ac0d5f35f954e.scope - libcontainer container 0e96315590b74ffac7ae908d5efba4889b3c837bed25a4f72c1ac0d5f35f954e.
Dec 13 08:58:55.899295 containerd[1473]: time="2024-12-13T08:58:55.898939254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdtfw,Uid:5b67b005-87d1-4b04-833a-5887db7b7c84,Namespace:calico-system,Attempt:0,}"
Dec 13 08:58:55.935405 containerd[1473]: time="2024-12-13T08:58:55.935294517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:58:55.935783 containerd[1473]: time="2024-12-13T08:58:55.935355198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:58:55.935783 containerd[1473]: time="2024-12-13T08:58:55.935376678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:55.935783 containerd[1473]: time="2024-12-13T08:58:55.935488160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:58:55.950758 containerd[1473]: time="2024-12-13T08:58:55.950637426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56d8fd84c-z8d6r,Uid:64749c66-fcbc-45e4-9644-7c84f32f508f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e96315590b74ffac7ae908d5efba4889b3c837bed25a4f72c1ac0d5f35f954e\""
Dec 13 08:58:55.956910 containerd[1473]: time="2024-12-13T08:58:55.956779758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 08:58:55.978639 systemd[1]: Started cri-containerd-1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b.scope - libcontainer container 1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b.
Dec 13 08:58:56.027161 containerd[1473]: time="2024-12-13T08:58:56.027110920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdtfw,Uid:5b67b005-87d1-4b04-833a-5887db7b7c84,Namespace:calico-system,Attempt:0,} returns sandbox id \"1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b\""
Dec 13 08:58:57.582344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931198315.mount: Deactivated successfully.
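Between the FlexVolume noise, a complete CRI sandbox lifecycle is visible: containerd answers each RunPodSandbox call with a 64-hex-digit sandbox id (0e963155... for calico-typha-56d8fd84c-z8d6r, 1825f66e... for calico-node-hdtfw), and the same id names the cri-containerd-<id>.scope unit systemd starts for it. A small illustrative parser, assuming nothing beyond journal text in the escaped-quote form seen here, makes the pairing explicit:

import re

# Correlate "returns sandbox id" replies with the cri-containerd-<id>.scope
# units started for them; both carry the same 64-hex-digit id. Scopes for
# plain containers (e.g. 792d0dc8... for the calico-typha container below)
# map to False, since their id is a container id rather than a sandbox id.
SANDBOX_RE = re.compile(r'returns sandbox id \\"([0-9a-f]{64})\\"')
SCOPE_RE = re.compile(r'Started cri-containerd-([0-9a-f]{64})\.scope')

def correlate(journal_text: str) -> dict[str, bool]:
    """Map each started scope id to whether a RunPodSandbox reply names it."""
    sandbox_ids = set(SANDBOX_RE.findall(journal_text))
    return {sid: sid in sandbox_ids for sid in SCOPE_RE.findall(journal_text)}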
Dec 13 08:58:57.598001 kubelet[2703]: E1213 08:58:57.597948 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b"
Dec 13 08:58:58.435463 containerd[1473]: time="2024-12-13T08:58:58.435220491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:58.437552 containerd[1473]: time="2024-12-13T08:58:58.437235119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Dec 13 08:58:58.439447 containerd[1473]: time="2024-12-13T08:58:58.439402429Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:58.444023 containerd[1473]: time="2024-12-13T08:58:58.443708850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 08:58:58.444562 containerd[1473]: time="2024-12-13T08:58:58.444519741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.487694943s"
Dec 13 08:58:58.444562 containerd[1473]: time="2024-12-13T08:58:58.444558982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Dec 13 08:58:58.446152 containerd[1473]: time="2024-12-13T08:58:58.445589756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 08:58:58.464711 containerd[1473]: time="2024-12-13T08:58:58.464639064Z" level=info msg="CreateContainer within sandbox \"0e96315590b74ffac7ae908d5efba4889b3c837bed25a4f72c1ac0d5f35f954e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 08:58:58.486062 containerd[1473]: time="2024-12-13T08:58:58.486005204Z" level=info msg="CreateContainer within sandbox \"0e96315590b74ffac7ae908d5efba4889b3c837bed25a4f72c1ac0d5f35f954e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"792d0dc8cf421b4c6a915a415cfc4f6da8fa355666cf8e1224310ab836ee2f71\""
Dec 13 08:58:58.487223 containerd[1473]: time="2024-12-13T08:58:58.487191941Z" level=info msg="StartContainer for \"792d0dc8cf421b4c6a915a415cfc4f6da8fa355666cf8e1224310ab836ee2f71\""
Dec 13 08:58:58.531665 systemd[1]: Started cri-containerd-792d0dc8cf421b4c6a915a415cfc4f6da8fa355666cf8e1224310ab836ee2f71.scope - libcontainer container 792d0dc8cf421b4c6a915a415cfc4f6da8fa355666cf8e1224310ab836ee2f71.
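Two threads interleave above. First, the csi-node-driver-chvmc pod cannot sync yet because the runtime network is not ready: the kubelet keeps NetworkReady=false until a CNI configuration appears (conventionally under /etc/cni/net.d, which calico-node's CNI installer populates), so this error is expected to recur until calico-node finishes initializing. The sketch below is a rough illustrative approximation of that readiness condition, not the kubelet's actual code. Second, the typha pull completes: the logged 2.487694943s lines up with the PullImage request at 08:58:55.956779758 (55.956779758 + 2.487694943 = 58.444474701, just before the Pulled event at 58.444519741).

import pathlib

# Rough stand-in for the NetworkReady condition behind the "cni plugin not
# initialized" errors: the network is considered ready once a CNI config
# file exists in the conf dir (default /etc/cni/net.d), which calico-node's
# installer writes after it starts.
def cni_initialized(conf_dir: str = "/etc/cni/net.d") -> bool:
    d = pathlib.Path(conf_dir)
    return d.is_dir() and any(
        p.suffix in {".conf", ".conflist", ".json"} for p in d.iterdir())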
Dec 13 08:58:58.573142 containerd[1473]: time="2024-12-13T08:58:58.573009827Z" level=info msg="StartContainer for \"792d0dc8cf421b4c6a915a415cfc4f6da8fa355666cf8e1224310ab836ee2f71\" returns successfully"
Dec 13 08:58:58.746228 kubelet[2703]: I1213 08:58:58.745523 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-56d8fd84c-z8d6r" podStartSLOduration=1.2568524939999999 podStartE2EDuration="3.74542801s" podCreationTimestamp="2024-12-13 08:58:55 +0000 UTC" firstStartedPulling="2024-12-13 08:58:55.956414272 +0000 UTC m=+21.511217651" lastFinishedPulling="2024-12-13 08:58:58.444989788 +0000 UTC m=+23.999793167" observedRunningTime="2024-12-13 08:58:58.74542801 +0000 UTC m=+24.300222109" watchObservedRunningTime="2024-12-13 08:58:58.74542801 +0000 UTC m=+24.300231349"
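The latency-tracker entry above is internally consistent and worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling). The check below reproduces the logged numbers from the values printed in the entry itself; treat the decomposition as an observation about this record rather than a claim about kubelet internals.

# Seconds-within-minute values copied from the latency-tracker entry above
# (every timestamp falls inside 08:58, so the minute can be dropped).
created            = 55.000000000   # podCreationTimestamp  08:58:55
first_started_pull = 55.956414272   # firstStartedPulling
last_finished_pull = 58.444989788   # lastFinishedPulling
observed_running   = 58.745428010   # observedRunningTime   08:58:58.74542801

e2e = observed_running - created
slo = e2e - (last_finished_pull - first_started_pull)
print(f"podStartE2EDuration ~ {e2e:.9f}s")  # 3.745428010s, as logged
print(f"podStartSLOduration ~ {slo:.9f}s")  # 1.256852494s, as logged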
Error: unexpected end of JSON input" Dec 13 08:58:59.598566 kubelet[2703]: E1213 08:58:59.597984 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" Dec 13 08:58:59.727294 kubelet[2703]: I1213 08:58:59.727229 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:58:59.759242 kubelet[2703]: E1213 08:58:59.759203 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.760724 kubelet[2703]: W1213 08:58:59.760279 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.762287 kubelet[2703]: E1213 08:58:59.761320 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.762777 kubelet[2703]: E1213 08:58:59.762562 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.762777 kubelet[2703]: W1213 08:58:59.762587 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.762777 kubelet[2703]: E1213 08:58:59.762652 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.764805 kubelet[2703]: E1213 08:58:59.764090 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.764805 kubelet[2703]: W1213 08:58:59.764119 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.764805 kubelet[2703]: E1213 08:58:59.764141 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.765064 kubelet[2703]: E1213 08:58:59.765035 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.765064 kubelet[2703]: W1213 08:58:59.765051 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.765169 kubelet[2703]: E1213 08:58:59.765070 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:59.765303 kubelet[2703]: E1213 08:58:59.765280 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.765303 kubelet[2703]: W1213 08:58:59.765300 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.765303 kubelet[2703]: E1213 08:58:59.765312 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.765757 kubelet[2703]: E1213 08:58:59.765691 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.765757 kubelet[2703]: W1213 08:58:59.765706 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.765757 kubelet[2703]: E1213 08:58:59.765722 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.766037 kubelet[2703]: E1213 08:58:59.766011 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.766037 kubelet[2703]: W1213 08:58:59.766031 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.766168 kubelet[2703]: E1213 08:58:59.766046 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.766507 kubelet[2703]: E1213 08:58:59.766459 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.766507 kubelet[2703]: W1213 08:58:59.766486 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.766633 kubelet[2703]: E1213 08:58:59.766502 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.767113 kubelet[2703]: E1213 08:58:59.766976 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.767113 kubelet[2703]: W1213 08:58:59.766996 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.767113 kubelet[2703]: E1213 08:58:59.767010 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:59.767540 kubelet[2703]: E1213 08:58:59.767420 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.767540 kubelet[2703]: W1213 08:58:59.767438 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.767540 kubelet[2703]: E1213 08:58:59.767452 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.768018 kubelet[2703]: E1213 08:58:59.767976 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.768068 kubelet[2703]: W1213 08:58:59.768010 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.768068 kubelet[2703]: E1213 08:58:59.768045 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.768344 kubelet[2703]: E1213 08:58:59.768309 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.768344 kubelet[2703]: W1213 08:58:59.768326 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.769512 kubelet[2703]: E1213 08:58:59.768338 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.769512 kubelet[2703]: E1213 08:58:59.768627 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.769512 kubelet[2703]: W1213 08:58:59.768636 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.769512 kubelet[2703]: E1213 08:58:59.768673 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.769512 kubelet[2703]: E1213 08:58:59.768904 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.769512 kubelet[2703]: W1213 08:58:59.768913 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.769512 kubelet[2703]: E1213 08:58:59.768924 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:59.769512 kubelet[2703]: E1213 08:58:59.769225 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.769512 kubelet[2703]: W1213 08:58:59.769235 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.769512 kubelet[2703]: E1213 08:58:59.769259 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.787602 kubelet[2703]: E1213 08:58:59.787133 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.787602 kubelet[2703]: W1213 08:58:59.787550 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.788590 kubelet[2703]: E1213 08:58:59.787946 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.788590 kubelet[2703]: E1213 08:58:59.788260 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.788590 kubelet[2703]: W1213 08:58:59.788270 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.788590 kubelet[2703]: E1213 08:58:59.788285 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.788979 kubelet[2703]: E1213 08:58:59.788960 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.788979 kubelet[2703]: W1213 08:58:59.788977 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.789089 kubelet[2703]: E1213 08:58:59.789006 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.789236 kubelet[2703]: E1213 08:58:59.789221 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.789236 kubelet[2703]: W1213 08:58:59.789236 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.789303 kubelet[2703]: E1213 08:58:59.789249 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:59.789577 kubelet[2703]: E1213 08:58:59.789522 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.789577 kubelet[2703]: W1213 08:58:59.789563 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.789681 kubelet[2703]: E1213 08:58:59.789616 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.789806 kubelet[2703]: E1213 08:58:59.789797 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.789839 kubelet[2703]: W1213 08:58:59.789807 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.789839 kubelet[2703]: E1213 08:58:59.789822 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.790039 kubelet[2703]: E1213 08:58:59.790027 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.790039 kubelet[2703]: W1213 08:58:59.790038 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.790123 kubelet[2703]: E1213 08:58:59.790053 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.790362 kubelet[2703]: E1213 08:58:59.790349 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.790631 kubelet[2703]: W1213 08:58:59.790502 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.790631 kubelet[2703]: E1213 08:58:59.790536 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.790772 kubelet[2703]: E1213 08:58:59.790760 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.790831 kubelet[2703]: W1213 08:58:59.790821 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.790951 kubelet[2703]: E1213 08:58:59.790896 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:59.791040 kubelet[2703]: E1213 08:58:59.791030 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.791097 kubelet[2703]: W1213 08:58:59.791086 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.791208 kubelet[2703]: E1213 08:58:59.791156 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.791302 kubelet[2703]: E1213 08:58:59.791292 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.791401 kubelet[2703]: W1213 08:58:59.791356 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.791525 kubelet[2703]: E1213 08:58:59.791380 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.791584 kubelet[2703]: E1213 08:58:59.791559 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.791584 kubelet[2703]: W1213 08:58:59.791572 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.791687 kubelet[2703]: E1213 08:58:59.791676 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.791823 kubelet[2703]: E1213 08:58:59.791811 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.791823 kubelet[2703]: W1213 08:58:59.791821 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.791891 kubelet[2703]: E1213 08:58:59.791836 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.792112 kubelet[2703]: E1213 08:58:59.792095 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.792163 kubelet[2703]: W1213 08:58:59.792121 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.792191 kubelet[2703]: E1213 08:58:59.792168 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:58:59.792435 kubelet[2703]: E1213 08:58:59.792422 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.792435 kubelet[2703]: W1213 08:58:59.792435 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.792532 kubelet[2703]: E1213 08:58:59.792455 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.792892 kubelet[2703]: E1213 08:58:59.792852 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.792892 kubelet[2703]: W1213 08:58:59.792871 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.792892 kubelet[2703]: E1213 08:58:59.792886 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.793151 kubelet[2703]: E1213 08:58:59.793140 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.793151 kubelet[2703]: W1213 08:58:59.793151 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.793221 kubelet[2703]: E1213 08:58:59.793167 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:58:59.793331 kubelet[2703]: E1213 08:58:59.793322 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:58:59.793364 kubelet[2703]: W1213 08:58:59.793332 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:58:59.793364 kubelet[2703]: E1213 08:58:59.793343 2703 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:59:01.059823 containerd[1473]: time="2024-12-13T08:59:01.058991018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:01.059823 containerd[1473]: time="2024-12-13T08:59:01.059752988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 08:59:01.060807 containerd[1473]: time="2024-12-13T08:59:01.060776242Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:01.063176 containerd[1473]: time="2024-12-13T08:59:01.063130433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:01.064113 containerd[1473]: time="2024-12-13T08:59:01.064075565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 2.618447728s" Dec 13 08:59:01.064113 containerd[1473]: time="2024-12-13T08:59:01.064112966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 08:59:01.068048 containerd[1473]: time="2024-12-13T08:59:01.068007697Z" level=info msg="CreateContainer within sandbox \"1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 08:59:01.088587 containerd[1473]: time="2024-12-13T08:59:01.088528769Z" level=info msg="CreateContainer within sandbox \"1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925\"" Dec 13 08:59:01.090563 containerd[1473]: time="2024-12-13T08:59:01.089208698Z" level=info msg="StartContainer for \"26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925\"" Dec 13 08:59:01.130144 systemd[1]: Started cri-containerd-26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925.scope - libcontainer container 26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925. Dec 13 08:59:01.176973 containerd[1473]: time="2024-12-13T08:59:01.175923927Z" level=info msg="StartContainer for \"26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925\" returns successfully" Dec 13 08:59:01.195356 systemd[1]: cri-containerd-26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925.scope: Deactivated successfully. Dec 13 08:59:01.220667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925-rootfs.mount: Deactivated successfully. 
Dec 13 08:59:01.312502 containerd[1473]: time="2024-12-13T08:59:01.312262654Z" level=info msg="shim disconnected" id=26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925 namespace=k8s.io Dec 13 08:59:01.312502 containerd[1473]: time="2024-12-13T08:59:01.312352415Z" level=warning msg="cleaning up after shim disconnected" id=26630559891f84f43b40993b945ef2d7d3f41d262990f9a0f4e72c4831aab925 namespace=k8s.io Dec 13 08:59:01.312502 containerd[1473]: time="2024-12-13T08:59:01.312370216Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:59:01.598684 kubelet[2703]: E1213 08:59:01.598056 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" Dec 13 08:59:01.740640 containerd[1473]: time="2024-12-13T08:59:01.740594210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 08:59:03.598039 kubelet[2703]: E1213 08:59:03.597698 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" Dec 13 08:59:05.598563 kubelet[2703]: E1213 08:59:05.598441 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" Dec 13 08:59:06.413911 containerd[1473]: time="2024-12-13T08:59:06.413837541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:06.415700 containerd[1473]: time="2024-12-13T08:59:06.415492281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 08:59:06.415700 containerd[1473]: time="2024-12-13T08:59:06.415638483Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:06.419221 containerd[1473]: time="2024-12-13T08:59:06.418746480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:06.419766 containerd[1473]: time="2024-12-13T08:59:06.419722172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.679078841s" Dec 13 08:59:06.419766 containerd[1473]: time="2024-12-13T08:59:06.419764533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 08:59:06.425370 containerd[1473]: time="2024-12-13T08:59:06.425296519Z" 
level=info msg="CreateContainer within sandbox \"1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 08:59:06.446889 containerd[1473]: time="2024-12-13T08:59:06.446776739Z" level=info msg="CreateContainer within sandbox \"1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a\"" Dec 13 08:59:06.448417 containerd[1473]: time="2024-12-13T08:59:06.448117995Z" level=info msg="StartContainer for \"c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a\"" Dec 13 08:59:06.487797 systemd[1]: Started cri-containerd-c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a.scope - libcontainer container c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a. Dec 13 08:59:06.537665 containerd[1473]: time="2024-12-13T08:59:06.537611435Z" level=info msg="StartContainer for \"c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a\" returns successfully" Dec 13 08:59:07.076634 containerd[1473]: time="2024-12-13T08:59:07.076537766Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:59:07.080067 systemd[1]: cri-containerd-c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a.scope: Deactivated successfully. Dec 13 08:59:07.098632 kubelet[2703]: I1213 08:59:07.098468 2703 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 08:59:07.112888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a-rootfs.mount: Deactivated successfully. Dec 13 08:59:07.135849 kubelet[2703]: I1213 08:59:07.135803 2703 topology_manager.go:215] "Topology Admit Handler" podUID="0f81c19a-7569-4158-afcc-88fa220a0f30" podNamespace="kube-system" podName="coredns-76f75df574-pmrq8" Dec 13 08:59:07.139075 kubelet[2703]: I1213 08:59:07.138983 2703 topology_manager.go:215] "Topology Admit Handler" podUID="b5f43183-1de9-47e8-b420-26f81d9d2ef1" podNamespace="kube-system" podName="coredns-76f75df574-p84dh" Dec 13 08:59:07.159187 kubelet[2703]: I1213 08:59:07.158991 2703 topology_manager.go:215] "Topology Admit Handler" podUID="3b341b31-6a86-4a5b-85c7-acbdb333dcd5" podNamespace="calico-apiserver" podName="calico-apiserver-8775d4447-vsr2v" Dec 13 08:59:07.162725 kubelet[2703]: I1213 08:59:07.162670 2703 topology_manager.go:215] "Topology Admit Handler" podUID="ed34c5d8-9877-44f6-82cc-1e049f25725d" podNamespace="calico-system" podName="calico-kube-controllers-6bf7964f-ch45z" Dec 13 08:59:07.162892 kubelet[2703]: I1213 08:59:07.162864 2703 topology_manager.go:215] "Topology Admit Handler" podUID="4825802f-c466-4d66-9dd0-24e12a47633b" podNamespace="calico-apiserver" podName="calico-apiserver-8775d4447-zfgm4" Dec 13 08:59:07.167791 systemd[1]: Created slice kubepods-burstable-pod0f81c19a_7569_4158_afcc_88fa220a0f30.slice - libcontainer container kubepods-burstable-pod0f81c19a_7569_4158_afcc_88fa220a0f30.slice. Dec 13 08:59:07.181134 systemd[1]: Created slice kubepods-burstable-podb5f43183_1de9_47e8_b420_26f81d9d2ef1.slice - libcontainer container kubepods-burstable-podb5f43183_1de9_47e8_b420_26f81d9d2ef1.slice. 
Dec 13 08:59:07.189428 containerd[1473]: time="2024-12-13T08:59:07.189331624Z" level=info msg="shim disconnected" id=c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a namespace=k8s.io Dec 13 08:59:07.189428 containerd[1473]: time="2024-12-13T08:59:07.189412585Z" level=warning msg="cleaning up after shim disconnected" id=c0526dcb916600f1a2f6dab4f4f35e6de94015edef93fd98fc1442c03df92b1a namespace=k8s.io Dec 13 08:59:07.189428 containerd[1473]: time="2024-12-13T08:59:07.189424745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:59:07.196032 systemd[1]: Created slice kubepods-besteffort-pod3b341b31_6a86_4a5b_85c7_acbdb333dcd5.slice - libcontainer container kubepods-besteffort-pod3b341b31_6a86_4a5b_85c7_acbdb333dcd5.slice. Dec 13 08:59:07.216915 systemd[1]: Created slice kubepods-besteffort-pod4825802f_c466_4d66_9dd0_24e12a47633b.slice - libcontainer container kubepods-besteffort-pod4825802f_c466_4d66_9dd0_24e12a47633b.slice. Dec 13 08:59:07.224048 systemd[1]: Created slice kubepods-besteffort-poded34c5d8_9877_44f6_82cc_1e049f25725d.slice - libcontainer container kubepods-besteffort-poded34c5d8_9877_44f6_82cc_1e049f25725d.slice. Dec 13 08:59:07.235568 containerd[1473]: time="2024-12-13T08:59:07.235472891Z" level=warning msg="cleanup warnings time=\"2024-12-13T08:59:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 08:59:07.249452 kubelet[2703]: I1213 08:59:07.248742 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsc89\" (UniqueName: \"kubernetes.io/projected/b5f43183-1de9-47e8-b420-26f81d9d2ef1-kube-api-access-wsc89\") pod \"coredns-76f75df574-p84dh\" (UID: \"b5f43183-1de9-47e8-b420-26f81d9d2ef1\") " pod="kube-system/coredns-76f75df574-p84dh" Dec 13 08:59:07.249452 kubelet[2703]: I1213 08:59:07.248809 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jv6\" (UniqueName: \"kubernetes.io/projected/3b341b31-6a86-4a5b-85c7-acbdb333dcd5-kube-api-access-p4jv6\") pod \"calico-apiserver-8775d4447-vsr2v\" (UID: \"3b341b31-6a86-4a5b-85c7-acbdb333dcd5\") " pod="calico-apiserver/calico-apiserver-8775d4447-vsr2v" Dec 13 08:59:07.249452 kubelet[2703]: I1213 08:59:07.248872 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed34c5d8-9877-44f6-82cc-1e049f25725d-tigera-ca-bundle\") pod \"calico-kube-controllers-6bf7964f-ch45z\" (UID: \"ed34c5d8-9877-44f6-82cc-1e049f25725d\") " pod="calico-system/calico-kube-controllers-6bf7964f-ch45z" Dec 13 08:59:07.249452 kubelet[2703]: I1213 08:59:07.248900 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f81c19a-7569-4158-afcc-88fa220a0f30-config-volume\") pod \"coredns-76f75df574-pmrq8\" (UID: \"0f81c19a-7569-4158-afcc-88fa220a0f30\") " pod="kube-system/coredns-76f75df574-pmrq8" Dec 13 08:59:07.249452 kubelet[2703]: I1213 08:59:07.248925 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b341b31-6a86-4a5b-85c7-acbdb333dcd5-calico-apiserver-certs\") pod \"calico-apiserver-8775d4447-vsr2v\" (UID: \"3b341b31-6a86-4a5b-85c7-acbdb333dcd5\") " 
pod="calico-apiserver/calico-apiserver-8775d4447-vsr2v" Dec 13 08:59:07.251770 kubelet[2703]: I1213 08:59:07.248952 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtbm\" (UniqueName: \"kubernetes.io/projected/4825802f-c466-4d66-9dd0-24e12a47633b-kube-api-access-dwtbm\") pod \"calico-apiserver-8775d4447-zfgm4\" (UID: \"4825802f-c466-4d66-9dd0-24e12a47633b\") " pod="calico-apiserver/calico-apiserver-8775d4447-zfgm4" Dec 13 08:59:07.251770 kubelet[2703]: I1213 08:59:07.248980 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcgx\" (UniqueName: \"kubernetes.io/projected/0f81c19a-7569-4158-afcc-88fa220a0f30-kube-api-access-bpcgx\") pod \"coredns-76f75df574-pmrq8\" (UID: \"0f81c19a-7569-4158-afcc-88fa220a0f30\") " pod="kube-system/coredns-76f75df574-pmrq8" Dec 13 08:59:07.251770 kubelet[2703]: I1213 08:59:07.249000 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f43183-1de9-47e8-b420-26f81d9d2ef1-config-volume\") pod \"coredns-76f75df574-p84dh\" (UID: \"b5f43183-1de9-47e8-b420-26f81d9d2ef1\") " pod="kube-system/coredns-76f75df574-p84dh" Dec 13 08:59:07.251770 kubelet[2703]: I1213 08:59:07.249024 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsmb\" (UniqueName: \"kubernetes.io/projected/ed34c5d8-9877-44f6-82cc-1e049f25725d-kube-api-access-lnsmb\") pod \"calico-kube-controllers-6bf7964f-ch45z\" (UID: \"ed34c5d8-9877-44f6-82cc-1e049f25725d\") " pod="calico-system/calico-kube-controllers-6bf7964f-ch45z" Dec 13 08:59:07.251770 kubelet[2703]: I1213 08:59:07.249050 2703 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4825802f-c466-4d66-9dd0-24e12a47633b-calico-apiserver-certs\") pod \"calico-apiserver-8775d4447-zfgm4\" (UID: \"4825802f-c466-4d66-9dd0-24e12a47633b\") " pod="calico-apiserver/calico-apiserver-8775d4447-zfgm4" Dec 13 08:59:07.476937 containerd[1473]: time="2024-12-13T08:59:07.476871514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmrq8,Uid:0f81c19a-7569-4158-afcc-88fa220a0f30,Namespace:kube-system,Attempt:0,}" Dec 13 08:59:07.491481 containerd[1473]: time="2024-12-13T08:59:07.490418195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p84dh,Uid:b5f43183-1de9-47e8-b420-26f81d9d2ef1,Namespace:kube-system,Attempt:0,}" Dec 13 08:59:07.503763 containerd[1473]: time="2024-12-13T08:59:07.503712072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-vsr2v,Uid:3b341b31-6a86-4a5b-85c7-acbdb333dcd5,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:59:07.528607 containerd[1473]: time="2024-12-13T08:59:07.528493366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-zfgm4,Uid:4825802f-c466-4d66-9dd0-24e12a47633b,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:59:07.546127 containerd[1473]: time="2024-12-13T08:59:07.545767291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf7964f-ch45z,Uid:ed34c5d8-9877-44f6-82cc-1e049f25725d,Namespace:calico-system,Attempt:0,}" Dec 13 08:59:07.608311 systemd[1]: Created slice kubepods-besteffort-pod8cd9a8db_6f3d_4382_8dc3_75aed978669b.slice - libcontainer container 
kubepods-besteffort-pod8cd9a8db_6f3d_4382_8dc3_75aed978669b.slice. Dec 13 08:59:07.613669 containerd[1473]: time="2024-12-13T08:59:07.613509774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chvmc,Uid:8cd9a8db-6f3d-4382-8dc3-75aed978669b,Namespace:calico-system,Attempt:0,}" Dec 13 08:59:07.703634 containerd[1473]: time="2024-12-13T08:59:07.703577523Z" level=error msg="Failed to destroy network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.704107 containerd[1473]: time="2024-12-13T08:59:07.704067288Z" level=error msg="encountered an error cleaning up failed sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.704642 containerd[1473]: time="2024-12-13T08:59:07.704526974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p84dh,Uid:b5f43183-1de9-47e8-b420-26f81d9d2ef1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.705353 kubelet[2703]: E1213 08:59:07.705162 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.705353 kubelet[2703]: E1213 08:59:07.705235 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-p84dh" Dec 13 08:59:07.705353 kubelet[2703]: E1213 08:59:07.705255 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-p84dh" Dec 13 08:59:07.705523 kubelet[2703]: E1213 08:59:07.705327 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-p84dh_kube-system(b5f43183-1de9-47e8-b420-26f81d9d2ef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-p84dh_kube-system(b5f43183-1de9-47e8-b420-26f81d9d2ef1)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-p84dh" podUID="b5f43183-1de9-47e8-b420-26f81d9d2ef1" Dec 13 08:59:07.709273 containerd[1473]: time="2024-12-13T08:59:07.708331459Z" level=error msg="Failed to destroy network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.711787 containerd[1473]: time="2024-12-13T08:59:07.711729379Z" level=error msg="encountered an error cleaning up failed sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.712124 containerd[1473]: time="2024-12-13T08:59:07.712090184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmrq8,Uid:0f81c19a-7569-4158-afcc-88fa220a0f30,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.713161 kubelet[2703]: E1213 08:59:07.712791 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.713161 kubelet[2703]: E1213 08:59:07.712845 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pmrq8" Dec 13 08:59:07.713161 kubelet[2703]: E1213 08:59:07.712865 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pmrq8" Dec 13 08:59:07.713335 kubelet[2703]: E1213 08:59:07.712929 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pmrq8_kube-system(0f81c19a-7569-4158-afcc-88fa220a0f30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-pmrq8_kube-system(0f81c19a-7569-4158-afcc-88fa220a0f30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pmrq8" podUID="0f81c19a-7569-4158-afcc-88fa220a0f30" Dec 13 08:59:07.722103 containerd[1473]: time="2024-12-13T08:59:07.722051942Z" level=error msg="Failed to destroy network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.722644 containerd[1473]: time="2024-12-13T08:59:07.722611628Z" level=error msg="encountered an error cleaning up failed sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.722775 containerd[1473]: time="2024-12-13T08:59:07.722752030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-vsr2v,Uid:3b341b31-6a86-4a5b-85c7-acbdb333dcd5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.723462 kubelet[2703]: E1213 08:59:07.723061 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.723462 kubelet[2703]: E1213 08:59:07.723111 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8775d4447-vsr2v" Dec 13 08:59:07.723462 kubelet[2703]: E1213 08:59:07.723133 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8775d4447-vsr2v" Dec 13 08:59:07.723645 kubelet[2703]: E1213 08:59:07.723188 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-8775d4447-vsr2v_calico-apiserver(3b341b31-6a86-4a5b-85c7-acbdb333dcd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8775d4447-vsr2v_calico-apiserver(3b341b31-6a86-4a5b-85c7-acbdb333dcd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8775d4447-vsr2v" podUID="3b341b31-6a86-4a5b-85c7-acbdb333dcd5" Dec 13 08:59:07.740776 containerd[1473]: time="2024-12-13T08:59:07.740631202Z" level=error msg="Failed to destroy network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.741828 containerd[1473]: time="2024-12-13T08:59:07.741765176Z" level=error msg="encountered an error cleaning up failed sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.741941 containerd[1473]: time="2024-12-13T08:59:07.741859097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-zfgm4,Uid:4825802f-c466-4d66-9dd0-24e12a47633b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.742568 kubelet[2703]: E1213 08:59:07.742159 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.742568 kubelet[2703]: E1213 08:59:07.742216 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8775d4447-zfgm4" Dec 13 08:59:07.742568 kubelet[2703]: E1213 08:59:07.742243 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8775d4447-zfgm4" Dec 13 
08:59:07.743571 kubelet[2703]: E1213 08:59:07.742302 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8775d4447-zfgm4_calico-apiserver(4825802f-c466-4d66-9dd0-24e12a47633b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8775d4447-zfgm4_calico-apiserver(4825802f-c466-4d66-9dd0-24e12a47633b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8775d4447-zfgm4" podUID="4825802f-c466-4d66-9dd0-24e12a47633b" Dec 13 08:59:07.753366 kubelet[2703]: I1213 08:59:07.753331 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:07.754551 containerd[1473]: time="2024-12-13T08:59:07.754457246Z" level=info msg="StopPodSandbox for \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\"" Dec 13 08:59:07.754777 containerd[1473]: time="2024-12-13T08:59:07.754751210Z" level=info msg="Ensure that sandbox c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b in task-service has been cleanup successfully" Dec 13 08:59:07.756597 kubelet[2703]: I1213 08:59:07.756132 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:07.757276 containerd[1473]: time="2024-12-13T08:59:07.757184318Z" level=info msg="StopPodSandbox for \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\"" Dec 13 08:59:07.758714 containerd[1473]: time="2024-12-13T08:59:07.758604855Z" level=info msg="Ensure that sandbox 657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3 in task-service has been cleanup successfully" Dec 13 08:59:07.763930 kubelet[2703]: I1213 08:59:07.763833 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:07.765405 containerd[1473]: time="2024-12-13T08:59:07.765352975Z" level=info msg="StopPodSandbox for \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\"" Dec 13 08:59:07.767813 containerd[1473]: time="2024-12-13T08:59:07.767730603Z" level=info msg="Ensure that sandbox aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0 in task-service has been cleanup successfully" Dec 13 08:59:07.770009 kubelet[2703]: I1213 08:59:07.769973 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:07.776023 containerd[1473]: time="2024-12-13T08:59:07.775935541Z" level=info msg="StopPodSandbox for \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\"" Dec 13 08:59:07.776421 containerd[1473]: time="2024-12-13T08:59:07.776141503Z" level=info msg="Ensure that sandbox 5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d in task-service has been cleanup successfully" Dec 13 08:59:07.793770 containerd[1473]: time="2024-12-13T08:59:07.792768580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 08:59:07.808003 containerd[1473]: 
time="2024-12-13T08:59:07.807947520Z" level=error msg="Failed to destroy network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.808777 containerd[1473]: time="2024-12-13T08:59:07.808637489Z" level=error msg="encountered an error cleaning up failed sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.809291 containerd[1473]: time="2024-12-13T08:59:07.809240216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf7964f-ch45z,Uid:ed34c5d8-9877-44f6-82cc-1e049f25725d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.809935 kubelet[2703]: E1213 08:59:07.809585 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.809935 kubelet[2703]: E1213 08:59:07.809643 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf7964f-ch45z" Dec 13 08:59:07.809935 kubelet[2703]: E1213 08:59:07.809664 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf7964f-ch45z" Dec 13 08:59:07.810052 kubelet[2703]: E1213 08:59:07.809714 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bf7964f-ch45z_calico-system(ed34c5d8-9877-44f6-82cc-1e049f25725d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bf7964f-ch45z_calico-system(ed34c5d8-9877-44f6-82cc-1e049f25725d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bf7964f-ch45z" podUID="ed34c5d8-9877-44f6-82cc-1e049f25725d" Dec 13 08:59:07.824960 containerd[1473]: time="2024-12-13T08:59:07.824787000Z" level=error msg="Failed to destroy network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.833796 containerd[1473]: time="2024-12-13T08:59:07.833726546Z" level=error msg="encountered an error cleaning up failed sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.834469 containerd[1473]: time="2024-12-13T08:59:07.833821467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chvmc,Uid:8cd9a8db-6f3d-4382-8dc3-75aed978669b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.834570 kubelet[2703]: E1213 08:59:07.834175 2703 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.834570 kubelet[2703]: E1213 08:59:07.834238 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chvmc" Dec 13 08:59:07.834570 kubelet[2703]: E1213 08:59:07.834263 2703 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chvmc" Dec 13 08:59:07.834931 kubelet[2703]: E1213 08:59:07.834753 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-chvmc_calico-system(8cd9a8db-6f3d-4382-8dc3-75aed978669b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-chvmc_calico-system(8cd9a8db-6f3d-4382-8dc3-75aed978669b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" Dec 13 08:59:07.847594 containerd[1473]: time="2024-12-13T08:59:07.847539910Z" level=error msg="StopPodSandbox for \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\" failed" error="failed to destroy network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.848173 kubelet[2703]: E1213 08:59:07.847978 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:07.848173 kubelet[2703]: E1213 08:59:07.848066 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3"} Dec 13 08:59:07.848173 kubelet[2703]: E1213 08:59:07.848104 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5f43183-1de9-47e8-b420-26f81d9d2ef1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:07.848173 kubelet[2703]: E1213 08:59:07.848137 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5f43183-1de9-47e8-b420-26f81d9d2ef1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-p84dh" podUID="b5f43183-1de9-47e8-b420-26f81d9d2ef1" Dec 13 08:59:07.861774 containerd[1473]: time="2024-12-13T08:59:07.861692518Z" level=error msg="StopPodSandbox for \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\" failed" error="failed to destroy network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.862083 kubelet[2703]: E1213 08:59:07.861976 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" podSandboxID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:07.862083 kubelet[2703]: E1213 08:59:07.862030 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b"} Dec 13 08:59:07.862083 kubelet[2703]: E1213 08:59:07.862065 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4825802f-c466-4d66-9dd0-24e12a47633b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:07.862213 kubelet[2703]: E1213 08:59:07.862099 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4825802f-c466-4d66-9dd0-24e12a47633b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8775d4447-zfgm4" podUID="4825802f-c466-4d66-9dd0-24e12a47633b" Dec 13 08:59:07.873205 containerd[1473]: time="2024-12-13T08:59:07.873147494Z" level=error msg="StopPodSandbox for \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\" failed" error="failed to destroy network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.873736 kubelet[2703]: E1213 08:59:07.873575 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:07.873736 kubelet[2703]: E1213 08:59:07.873647 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0"} Dec 13 08:59:07.873736 kubelet[2703]: E1213 08:59:07.873687 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f81c19a-7569-4158-afcc-88fa220a0f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:07.873736 kubelet[2703]: E1213 08:59:07.873715 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"0f81c19a-7569-4158-afcc-88fa220a0f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pmrq8" podUID="0f81c19a-7569-4158-afcc-88fa220a0f30" Dec 13 08:59:07.875684 containerd[1473]: time="2024-12-13T08:59:07.875620683Z" level=error msg="StopPodSandbox for \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\" failed" error="failed to destroy network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:07.876073 kubelet[2703]: E1213 08:59:07.875942 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:07.876073 kubelet[2703]: E1213 08:59:07.875993 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d"} Dec 13 08:59:07.876073 kubelet[2703]: E1213 08:59:07.876027 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b341b31-6a86-4a5b-85c7-acbdb333dcd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:07.876073 kubelet[2703]: E1213 08:59:07.876054 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b341b31-6a86-4a5b-85c7-acbdb333dcd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8775d4447-vsr2v" podUID="3b341b31-6a86-4a5b-85c7-acbdb333dcd5" Dec 13 08:59:08.441178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3-shm.mount: Deactivated successfully. Dec 13 08:59:08.441280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0-shm.mount: Deactivated successfully. 
Dec 13 08:59:08.792821 kubelet[2703]: I1213 08:59:08.792639 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:08.797413 containerd[1473]: time="2024-12-13T08:59:08.794034851Z" level=info msg="StopPodSandbox for \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\"" Dec 13 08:59:08.797413 containerd[1473]: time="2024-12-13T08:59:08.795564749Z" level=info msg="Ensure that sandbox 3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187 in task-service has been cleanup successfully" Dec 13 08:59:08.801265 kubelet[2703]: I1213 08:59:08.801233 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:08.802749 containerd[1473]: time="2024-12-13T08:59:08.802713992Z" level=info msg="StopPodSandbox for \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\"" Dec 13 08:59:08.807245 containerd[1473]: time="2024-12-13T08:59:08.806249233Z" level=info msg="Ensure that sandbox 8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef in task-service has been cleanup successfully" Dec 13 08:59:08.841048 containerd[1473]: time="2024-12-13T08:59:08.840994518Z" level=error msg="StopPodSandbox for \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\" failed" error="failed to destroy network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:08.841947 kubelet[2703]: E1213 08:59:08.841909 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:08.842052 kubelet[2703]: E1213 08:59:08.841960 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187"} Dec 13 08:59:08.842052 kubelet[2703]: E1213 08:59:08.841997 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed34c5d8-9877-44f6-82cc-1e049f25725d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:08.842052 kubelet[2703]: E1213 08:59:08.842034 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed34c5d8-9877-44f6-82cc-1e049f25725d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bf7964f-ch45z" podUID="ed34c5d8-9877-44f6-82cc-1e049f25725d" Dec 13 08:59:08.848064 containerd[1473]: time="2024-12-13T08:59:08.847936639Z" level=error msg="StopPodSandbox for \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\" failed" error="failed to destroy network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:08.848323 kubelet[2703]: E1213 08:59:08.848197 2703 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:08.848323 kubelet[2703]: E1213 08:59:08.848244 2703 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef"} Dec 13 08:59:08.848323 kubelet[2703]: E1213 08:59:08.848286 2703 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8cd9a8db-6f3d-4382-8dc3-75aed978669b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:08.848323 kubelet[2703]: E1213 08:59:08.848314 2703 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8cd9a8db-6f3d-4382-8dc3-75aed978669b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-chvmc" podUID="8cd9a8db-6f3d-4382-8dc3-75aed978669b" Dec 13 08:59:14.397531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496154902.mount: Deactivated successfully. 
Dec 13 08:59:14.439770 containerd[1473]: time="2024-12-13T08:59:14.438234563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:14.441436 containerd[1473]: time="2024-12-13T08:59:14.441357995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 08:59:14.443377 containerd[1473]: time="2024-12-13T08:59:14.443329056Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:14.447886 containerd[1473]: time="2024-12-13T08:59:14.447832784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:14.449948 containerd[1473]: time="2024-12-13T08:59:14.449898925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.657076424s" Dec 13 08:59:14.450155 containerd[1473]: time="2024-12-13T08:59:14.450137848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 08:59:14.459910 containerd[1473]: time="2024-12-13T08:59:14.459859350Z" level=info msg="CreateContainer within sandbox \"1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 08:59:14.488990 containerd[1473]: time="2024-12-13T08:59:14.488296530Z" level=info msg="CreateContainer within sandbox \"1825f66ed250b6061122e4994aeb5b27e5f37dfcb76870b3468e23babbf0783b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389\"" Dec 13 08:59:14.489792 containerd[1473]: time="2024-12-13T08:59:14.489262460Z" level=info msg="StartContainer for \"65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389\"" Dec 13 08:59:14.522694 systemd[1]: Started cri-containerd-65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389.scope - libcontainer container 65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389. Dec 13 08:59:14.556129 containerd[1473]: time="2024-12-13T08:59:14.556082164Z" level=info msg="StartContainer for \"65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389\" returns successfully" Dec 13 08:59:14.675679 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 08:59:14.675834 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
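The image pull that was blocking all pod networking completes here: containerd reports 137671762 bytes read in 6.657076424s, roughly 20.7 MB/s, and the calico-node container starts immediately afterwards. A trivial sketch of that arithmetic, using only the figures from the lines above:

    // pullrate.go — throughput of the calico/node pull reported above.
    package main

    import "fmt"

    func main() {
        const bytesRead = 137671762 // from "active requests=0, bytes read=137671762"
        const seconds = 6.657076424 // from "... in 6.657076424s"
        fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // prints 20.7 MB/s
    }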
Dec 13 08:59:15.823211 kubelet[2703]: I1213 08:59:15.821743 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:19.496403 kubelet[2703]: I1213 08:59:19.495283 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:19.522375 kubelet[2703]: I1213 08:59:19.521274 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hdtfw" podStartSLOduration=6.099226042 podStartE2EDuration="24.521218273s" podCreationTimestamp="2024-12-13 08:58:55 +0000 UTC" firstStartedPulling="2024-12-13 08:58:56.028504941 +0000 UTC m=+21.583308320" lastFinishedPulling="2024-12-13 08:59:14.450497172 +0000 UTC m=+40.005300551" observedRunningTime="2024-12-13 08:59:14.851040353 +0000 UTC m=+40.405843732" watchObservedRunningTime="2024-12-13 08:59:19.521218273 +0000 UTC m=+45.076021652" Dec 13 08:59:19.601092 containerd[1473]: time="2024-12-13T08:59:19.601011252Z" level=info msg="StopPodSandbox for \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\"" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.699 [INFO][4046] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.700 [INFO][4046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" iface="eth0" netns="/var/run/netns/cni-9d2818b6-36d0-2b15-2da0-fcb51efeb1f9" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.700 [INFO][4046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" iface="eth0" netns="/var/run/netns/cni-9d2818b6-36d0-2b15-2da0-fcb51efeb1f9" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.701 [INFO][4046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" iface="eth0" netns="/var/run/netns/cni-9d2818b6-36d0-2b15-2da0-fcb51efeb1f9" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.701 [INFO][4046] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.702 [INFO][4046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.742 [INFO][4070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.742 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.742 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.755 [WARNING][4070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.755 [INFO][4070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.757 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:19.763355 containerd[1473]: 2024-12-13 08:59:19.760 [INFO][4046] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:19.766445 containerd[1473]: time="2024-12-13T08:59:19.763969282Z" level=info msg="TearDown network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\" successfully" Dec 13 08:59:19.766445 containerd[1473]: time="2024-12-13T08:59:19.764010682Z" level=info msg="StopPodSandbox for \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\" returns successfully" Dec 13 08:59:19.766110 systemd[1]: run-netns-cni\x2d9d2818b6\x2d36d0\x2d2b15\x2d2da0\x2dfcb51efeb1f9.mount: Deactivated successfully. Dec 13 08:59:19.777754 containerd[1473]: time="2024-12-13T08:59:19.777430453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chvmc,Uid:8cd9a8db-6f3d-4382-8dc3-75aed978669b,Namespace:calico-system,Attempt:1,}" Dec 13 08:59:19.958854 systemd-networkd[1372]: cali4b710e1a750: Link UP Dec 13 08:59:19.959712 systemd-networkd[1372]: cali4b710e1a750: Gained carrier Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.824 [INFO][4078] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.843 [INFO][4078] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0 csi-node-driver- calico-system 8cd9a8db-6f3d-4382-8dc3-75aed978669b 771 0 2024-12-13 08:58:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-e-e153687e15 csi-node-driver-chvmc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4b710e1a750 [] []}} ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.843 [INFO][4078] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.882 [INFO][4088] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" HandleID="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.898 [INFO][4088] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" HandleID="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-e-e153687e15", "pod":"csi-node-driver-chvmc", "timestamp":"2024-12-13 08:59:19.882039314 +0000 UTC"}, Hostname:"ci-4081-2-1-e-e153687e15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.898 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.898 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.898 [INFO][4088] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-e153687e15' Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.901 [INFO][4088] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.908 [INFO][4088] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.915 [INFO][4088] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.919 [INFO][4088] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.922 [INFO][4088] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.923 [INFO][4088] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.925 [INFO][4088] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396 Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.933 [INFO][4088] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.943 [INFO][4088] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.65/26] block=192.168.124.64/26 handle="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" 
host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.943 [INFO][4088] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.65/26] handle="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.943 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:19.984828 containerd[1473]: 2024-12-13 08:59:19.943 [INFO][4088] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.65/26] IPv6=[] ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" HandleID="k8s-pod-network.d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.985958 containerd[1473]: 2024-12-13 08:59:19.946 [INFO][4078] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8cd9a8db-6f3d-4382-8dc3-75aed978669b", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"", Pod:"csi-node-driver-chvmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b710e1a750", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:19.985958 containerd[1473]: 2024-12-13 08:59:19.947 [INFO][4078] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.65/32] ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.985958 containerd[1473]: 2024-12-13 08:59:19.947 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b710e1a750 ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.985958 containerd[1473]: 2024-12-13 08:59:19.958 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:19.985958 containerd[1473]: 2024-12-13 08:59:19.959 [INFO][4078] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8cd9a8db-6f3d-4382-8dc3-75aed978669b", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396", Pod:"csi-node-driver-chvmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b710e1a750", MAC:"86:fc:ca:9e:8e:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:19.985958 containerd[1473]: 2024-12-13 08:59:19.973 [INFO][4078] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396" Namespace="calico-system" Pod="csi-node-driver-chvmc" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:20.009926 containerd[1473]: time="2024-12-13T08:59:20.009648438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:20.009926 containerd[1473]: time="2024-12-13T08:59:20.009713999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:20.009926 containerd[1473]: time="2024-12-13T08:59:20.009729719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:20.009926 containerd[1473]: time="2024-12-13T08:59:20.009821400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:20.050852 systemd[1]: Started cri-containerd-d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396.scope - libcontainer container d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396. Dec 13 08:59:20.082888 containerd[1473]: time="2024-12-13T08:59:20.082847342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chvmc,Uid:8cd9a8db-6f3d-4382-8dc3-75aed978669b,Namespace:calico-system,Attempt:1,} returns sandbox id \"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396\"" Dec 13 08:59:20.097905 containerd[1473]: time="2024-12-13T08:59:20.097699045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 08:59:20.599432 kernel: bpftool[4172]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 08:59:20.766235 systemd[1]: run-containerd-runc-k8s.io-d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396-runc.9GUHXX.mount: Deactivated successfully. Dec 13 08:59:20.827682 systemd-networkd[1372]: vxlan.calico: Link UP Dec 13 08:59:20.827697 systemd-networkd[1372]: vxlan.calico: Gained carrier Dec 13 08:59:21.451735 systemd-networkd[1372]: cali4b710e1a750: Gained IPv6LL Dec 13 08:59:21.599440 containerd[1473]: time="2024-12-13T08:59:21.599348124Z" level=info msg="StopPodSandbox for \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\"" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.673 [INFO][4255] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.673 [INFO][4255] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" iface="eth0" netns="/var/run/netns/cni-95adf277-7d33-990f-6bbe-ef2028be0a28" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.673 [INFO][4255] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" iface="eth0" netns="/var/run/netns/cni-95adf277-7d33-990f-6bbe-ef2028be0a28" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.675 [INFO][4255] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" iface="eth0" netns="/var/run/netns/cni-95adf277-7d33-990f-6bbe-ef2028be0a28" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.675 [INFO][4255] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.675 [INFO][4255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.699 [INFO][4261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.699 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.699 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.713 [WARNING][4261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.713 [INFO][4261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.716 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:21.719230 containerd[1473]: 2024-12-13 08:59:21.717 [INFO][4255] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:21.722011 containerd[1473]: time="2024-12-13T08:59:21.721475802Z" level=info msg="TearDown network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\" successfully" Dec 13 08:59:21.722011 containerd[1473]: time="2024-12-13T08:59:21.721520362Z" level=info msg="StopPodSandbox for \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\" returns successfully" Dec 13 08:59:21.722305 containerd[1473]: time="2024-12-13T08:59:21.722259729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-zfgm4,Uid:4825802f-c466-4d66-9dd0-24e12a47633b,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:59:21.723161 systemd[1]: run-netns-cni\x2d95adf277\x2d7d33\x2d990f\x2d6bbe\x2def2028be0a28.mount: Deactivated successfully. 
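With calico/node running, the teardowns and re-creates now succeed: each failed sandbox is cleaned up under the host-wide IPAM lock and its pod is re-run as Attempt:1. The IPAM lines show this node holds an affinity for the block 192.168.124.64/26; csi-node-driver-chvmc received 192.168.124.65 above, and calico-apiserver-8775d4447-zfgm4 is assigned 192.168.124.66 just below. A small Go sketch of the block arithmetic, illustrative only (stdlib net/netip rather than Calico's IPAM code):

    // ipam_block.go — the /26 affinity block visible in the IPAM log lines.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.124.64/26") // block from the log
        size := 1 << (32 - block.Bits())                    // 2^(32-26) = 64 addresses
        fmt.Printf("block %v holds %d addresses\n", block, size)

        // The two pod IPs assigned in this section, both /32s from the block.
        for _, s := range []string{"192.168.124.65", "192.168.124.66"} {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%v in block: %v\n", addr, block.Contains(addr))
        }
    }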
Dec 13 08:59:21.890989 systemd-networkd[1372]: cali45f7ce608c5: Link UP Dec 13 08:59:21.891200 systemd-networkd[1372]: cali45f7ce608c5: Gained carrier Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.788 [INFO][4269] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0 calico-apiserver-8775d4447- calico-apiserver 4825802f-c466-4d66-9dd0-24e12a47633b 780 0 2024-12-13 08:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8775d4447 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-e-e153687e15 calico-apiserver-8775d4447-zfgm4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali45f7ce608c5 [] []}} ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.789 [INFO][4269] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.824 [INFO][4281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" HandleID="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.842 [INFO][4281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" HandleID="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-e-e153687e15", "pod":"calico-apiserver-8775d4447-zfgm4", "timestamp":"2024-12-13 08:59:21.824142735 +0000 UTC"}, Hostname:"ci-4081-2-1-e-e153687e15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.842 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.842 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.842 [INFO][4281] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-e153687e15' Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.845 [INFO][4281] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.852 [INFO][4281] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.859 [INFO][4281] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.862 [INFO][4281] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.866 [INFO][4281] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.866 [INFO][4281] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.869 [INFO][4281] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.877 [INFO][4281] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.884 [INFO][4281] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.66/26] block=192.168.124.64/26 handle="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.884 [INFO][4281] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.66/26] handle="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.884 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:21.915816 containerd[1473]: 2024-12-13 08:59:21.884 [INFO][4281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.66/26] IPv6=[] ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" HandleID="k8s-pod-network.a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.918779 containerd[1473]: 2024-12-13 08:59:21.886 [INFO][4269] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"4825802f-c466-4d66-9dd0-24e12a47633b", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"", Pod:"calico-apiserver-8775d4447-zfgm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45f7ce608c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:21.918779 containerd[1473]: 2024-12-13 08:59:21.886 [INFO][4269] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.66/32] ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.918779 containerd[1473]: 2024-12-13 08:59:21.886 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45f7ce608c5 ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.918779 containerd[1473]: 2024-12-13 08:59:21.891 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.918779 containerd[1473]: 2024-12-13 08:59:21.892 [INFO][4269] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"4825802f-c466-4d66-9dd0-24e12a47633b", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac", Pod:"calico-apiserver-8775d4447-zfgm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45f7ce608c5", MAC:"e2:67:fa:b9:5b:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:21.918779 containerd[1473]: 2024-12-13 08:59:21.912 [INFO][4269] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-zfgm4" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:21.951254 containerd[1473]: time="2024-12-13T08:59:21.951103378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:21.951254 containerd[1473]: time="2024-12-13T08:59:21.951194219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:21.951254 containerd[1473]: time="2024-12-13T08:59:21.951219100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:21.954449 containerd[1473]: time="2024-12-13T08:59:21.952161829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:21.992665 systemd[1]: Started cri-containerd-a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac.scope - libcontainer container a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac. 
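The block above is one complete Calico IPAM round for a pod: the plugin takes the host-wide IPAM lock, finds the block 192.168.124.64/26 whose affinity belongs to ci-4081-2-1-e-e153687e15, claims the next free address (192.168.124.66), writes the block back to the datastore, and releases the lock before the endpoint is populated. A minimal sketch of that block-affinity pattern follows; the types and functions are hypothetical stand-ins for illustration, not Calico's actual libcalico-go API:

// Illustrative sketch only: models the lock -> load affine block ->
// claim next free IP -> write block sequence visible in the log above.
// Not Calico's API; names and types are invented for this example.
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 allocation block with per-host affinity.
type block struct {
	cidr      *net.IPNet
	affinity  string            // host that owns this block
	allocated map[string]string // IP -> allocation handle
}

type ipam struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	blocks []*block
}

// autoAssign claims one IPv4 address for host from a block affine to it,
// mirroring "Auto-assign 1 ipv4" -> "Trying affinity" -> "Writing block".
func (p *ipam) autoAssign(host, handle string) (net.IP, error) {
	p.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer p.mu.Unlock() // "Released host-wide IPAM lock."

	for _, b := range p.blocks {
		if b.affinity != host {
			continue // affine blocks are tried first
		}
		// Hand out the first unallocated address in the block.
		for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
			if _, taken := b.allocated[ip.String()]; !taken {
				b.allocated[ip.String()] = handle // "Writing block in order to claim IPs"
				return ip, nil
			}
		}
	}
	return nil, fmt.Errorf("no affine block with free addresses for %s", host)
}

// next returns ip+1 (IPv4-oriented; sufficient for the sketch).
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.124.64/26")
	p := &ipam{blocks: []*block{{
		cidr:     cidr,
		affinity: "ci-4081-2-1-e-e153687e15",
		// .64 (network address) and .65 are already taken, so the next
		// claim yields .66, matching "Successfully claimed IPs:
		// [192.168.124.66/26]" above. Real IPAM also reserves special
		// addresses; this sketch just pre-fills them.
		allocated: map[string]string{
			"192.168.124.64": "reserved",
			"192.168.124.65": "k8s-pod-network.example-earlier-pod",
		},
	}}}
	ip, err := p.autoAssign("ci-4081-2-1-e-e153687e15", "k8s-pod-network.example")
	fmt.Println(ip, err) // 192.168.124.66 <nil>
}

The per-host affinity is also why the lock stays cheap: a node normally allocates only from blocks it owns, so the /26 granularity both limits contention on the shared datastore and lets routes aggregate per host.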
Dec 13 08:59:22.053100 containerd[1473]: time="2024-12-13T08:59:22.053054058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-zfgm4,Uid:4825802f-c466-4d66-9dd0-24e12a47633b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac\"" Dec 13 08:59:22.605084 containerd[1473]: time="2024-12-13T08:59:22.603339281Z" level=info msg="StopPodSandbox for \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\"" Dec 13 08:59:22.614359 containerd[1473]: time="2024-12-13T08:59:22.609729221Z" level=info msg="StopPodSandbox for \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\"" Dec 13 08:59:22.614359 containerd[1473]: time="2024-12-13T08:59:22.613228174Z" level=info msg="StopPodSandbox for \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\"" Dec 13 08:59:22.607961 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Dec 13 08:59:22.646789 containerd[1473]: time="2024-12-13T08:59:22.646738127Z" level=info msg="StopPodSandbox for \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\"" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.777 [INFO][4402] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.784 [INFO][4402] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" iface="eth0" netns="/var/run/netns/cni-1e98a6e6-c6a9-6eca-a6f6-e3528fd5f183" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.786 [INFO][4402] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" iface="eth0" netns="/var/run/netns/cni-1e98a6e6-c6a9-6eca-a6f6-e3528fd5f183" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.787 [INFO][4402] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" iface="eth0" netns="/var/run/netns/cni-1e98a6e6-c6a9-6eca-a6f6-e3528fd5f183" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.787 [INFO][4402] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.787 [INFO][4402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.888 [INFO][4421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.888 [INFO][4421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.889 [INFO][4421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.910 [WARNING][4421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.912 [INFO][4421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.915 [INFO][4421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:22.921445 containerd[1473]: 2024-12-13 08:59:22.918 [INFO][4402] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:22.925915 systemd[1]: run-netns-cni\x2d1e98a6e6\x2dc6a9\x2d6eca\x2da6f6\x2de3528fd5f183.mount: Deactivated successfully. Dec 13 08:59:22.928275 containerd[1473]: time="2024-12-13T08:59:22.928232918Z" level=info msg="TearDown network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\" successfully" Dec 13 08:59:22.928476 containerd[1473]: time="2024-12-13T08:59:22.928459800Z" level=info msg="StopPodSandbox for \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\" returns successfully" Dec 13 08:59:22.934789 containerd[1473]: time="2024-12-13T08:59:22.934380056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmrq8,Uid:0f81c19a-7569-4158-afcc-88fa220a0f30,Namespace:kube-system,Attempt:1,}" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.848 [INFO][4382] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.850 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" iface="eth0" netns="/var/run/netns/cni-4b056122-2905-606f-fea9-a3945738d16b" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.850 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" iface="eth0" netns="/var/run/netns/cni-4b056122-2905-606f-fea9-a3945738d16b" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.855 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" iface="eth0" netns="/var/run/netns/cni-4b056122-2905-606f-fea9-a3945738d16b" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.855 [INFO][4382] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.855 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.920 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.920 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.920 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.934 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.934 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.938 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:22.946871 containerd[1473]: 2024-12-13 08:59:22.942 [INFO][4382] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:22.950866 containerd[1473]: time="2024-12-13T08:59:22.950137323Z" level=info msg="TearDown network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\" successfully" Dec 13 08:59:22.951121 containerd[1473]: time="2024-12-13T08:59:22.951095532Z" level=info msg="StopPodSandbox for \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\" returns successfully" Dec 13 08:59:22.954104 systemd[1]: run-netns-cni\x2d4b056122\x2d2905\x2d606f\x2dfea9\x2da3945738d16b.mount: Deactivated successfully. Dec 13 08:59:22.956142 containerd[1473]: time="2024-12-13T08:59:22.956088499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p84dh,Uid:b5f43183-1de9-47e8-b420-26f81d9d2ef1,Namespace:kube-system,Attempt:1,}" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.851 [INFO][4393] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.855 [INFO][4393] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" iface="eth0" netns="/var/run/netns/cni-0a128f9e-ffb2-5e88-2e56-c7f8d70cacc6" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.857 [INFO][4393] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" iface="eth0" netns="/var/run/netns/cni-0a128f9e-ffb2-5e88-2e56-c7f8d70cacc6" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.860 [INFO][4393] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" iface="eth0" netns="/var/run/netns/cni-0a128f9e-ffb2-5e88-2e56-c7f8d70cacc6" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.860 [INFO][4393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.860 [INFO][4393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.967 [INFO][4428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.967 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.968 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.985 [WARNING][4428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.985 [INFO][4428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.987 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:22.995843 containerd[1473]: 2024-12-13 08:59:22.992 [INFO][4393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:23.002097 containerd[1473]: time="2024-12-13T08:59:23.002044008Z" level=info msg="TearDown network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\" successfully" Dec 13 08:59:23.002466 systemd[1]: run-netns-cni\x2d0a128f9e\x2dffb2\x2d5e88\x2d2e56\x2dc7f8d70cacc6.mount: Deactivated successfully. 
Dec 13 08:59:23.002693 containerd[1473]: time="2024-12-13T08:59:23.002663334Z" level=info msg="StopPodSandbox for \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\" returns successfully" Dec 13 08:59:23.006605 containerd[1473]: time="2024-12-13T08:59:23.006559569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-vsr2v,Uid:3b341b31-6a86-4a5b-85c7-acbdb333dcd5,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:22.878 [INFO][4401] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:22.879 [INFO][4401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" iface="eth0" netns="/var/run/netns/cni-103bc40e-61b6-1c00-b0a0-a2ac1a718c76" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:22.879 [INFO][4401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" iface="eth0" netns="/var/run/netns/cni-103bc40e-61b6-1c00-b0a0-a2ac1a718c76" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:22.881 [INFO][4401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" iface="eth0" netns="/var/run/netns/cni-103bc40e-61b6-1c00-b0a0-a2ac1a718c76" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:22.881 [INFO][4401] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:22.882 [INFO][4401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:23.028 [INFO][4436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:23.028 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:23.028 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:23.045 [WARNING][4436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:23.045 [INFO][4436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:23.049 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:23.062229 containerd[1473]: 2024-12-13 08:59:23.054 [INFO][4401] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:23.064139 containerd[1473]: time="2024-12-13T08:59:23.064094300Z" level=info msg="TearDown network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\" successfully" Dec 13 08:59:23.064427 containerd[1473]: time="2024-12-13T08:59:23.064295702Z" level=info msg="StopPodSandbox for \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\" returns successfully" Dec 13 08:59:23.067265 containerd[1473]: time="2024-12-13T08:59:23.067167568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf7964f-ch45z,Uid:ed34c5d8-9877-44f6-82cc-1e049f25725d,Namespace:calico-system,Attempt:1,}" Dec 13 08:59:23.072227 systemd[1]: run-netns-cni\x2d103bc40e\x2d61b6\x2d1c00\x2db0a0\x2da2ac1a718c76.mount: Deactivated successfully. 
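Each of the four sandbox teardowns above ends the same way: the release by handle ID warns "Asked to release address but it doesn't exist. Ignoring", the plugin falls back to releasing by workload ID, and StopPodSandbox still returns successfully. A minimal sketch of that idempotent release order, assuming a hypothetical in-memory store (the function names are invented, not Calico's API):

// Illustrative sketch only: release by handleID, fall back to workloadID,
// and treat "not found" as success so a repeated CNI DEL can never fail.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// store maps allocation keys (handleID or workloadID) to assigned IPs.
type store map[string]string

func (s store) releaseByKey(key string) error {
	if _, ok := s[key]; !ok {
		return errNotFound
	}
	delete(s, key)
	return nil
}

// release mirrors the logged order: handleID first, workloadID second,
// ignoring a missing allocation at either step.
func release(s store, handleID, workloadID string) {
	if err := s.releaseByKey(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		if err := s.releaseByKey(workloadID); errors.Is(err, errNotFound) {
			// Nothing held under either key: the teardown is a no-op,
			// which is exactly what makes repeated DELs safe.
			fmt.Println("no allocation under workload ID either; nothing to do")
		}
	}
}

func main() {
	s := store{"ci-4081-2-1-e-e153687e15.coredns-pmrq8": "192.168.124.65"}
	// First DEL: handle lookup misses, workload lookup releases the IP.
	release(s, "k8s-pod-network.aba08fe9", "ci-4081-2-1-e-e153687e15.coredns-pmrq8")
	// Second DEL for the same pod: both lookups miss, still succeeds.
	release(s, "k8s-pod-network.aba08fe9", "ci-4081-2-1-e-e153687e15.coredns-pmrq8")
}

The "Workload's veth was already gone. Nothing to do." lines in the same teardowns apply the identical principle to the network device: every step of a CNI DEL tolerates state that was already cleaned up.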
Dec 13 08:59:23.177970 containerd[1473]: time="2024-12-13T08:59:23.177686827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:23.184855 containerd[1473]: time="2024-12-13T08:59:23.184631531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 08:59:23.186994 containerd[1473]: time="2024-12-13T08:59:23.186849871Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:23.193027 containerd[1473]: time="2024-12-13T08:59:23.192884927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:23.195170 containerd[1473]: time="2024-12-13T08:59:23.194993506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 3.09719486s" Dec 13 08:59:23.195170 containerd[1473]: time="2024-12-13T08:59:23.195039507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 08:59:23.196475 containerd[1473]: time="2024-12-13T08:59:23.195960035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 08:59:23.214529 containerd[1473]: time="2024-12-13T08:59:23.214466846Z" level=info msg="CreateContainer within sandbox \"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 08:59:23.336049 containerd[1473]: time="2024-12-13T08:59:23.335886365Z" level=info msg="CreateContainer within sandbox \"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d3822ea74496153e192b94792365bb44cc059cbc31419959d353469681081668\"" Dec 13 08:59:23.337620 containerd[1473]: time="2024-12-13T08:59:23.337123617Z" level=info msg="StartContainer for \"d3822ea74496153e192b94792365bb44cc059cbc31419959d353469681081668\"" Dec 13 08:59:23.399666 systemd[1]: Started cri-containerd-d3822ea74496153e192b94792365bb44cc059cbc31419959d353469681081668.scope - libcontainer container d3822ea74496153e192b94792365bb44cc059cbc31419959d353469681081668. 
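The pull recorded just above ("Pulled image ghcr.io/flatcar/calico/csi:v3.29.1 ... in 3.09719486s") can be reproduced against the same containerd instance with its Go client. A minimal sketch, assuming containerd's v1.x Go client and the "k8s.io" namespace that kubelet-managed images live in; error handling is abbreviated:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd that wrote the log above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are kept in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the kubelet requested; this is
	// what produces the ImageCreate and "stop pulling image" events.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", image.Name())
}

Note the two sizes in the log are not a contradiction: "bytes read=7464730" counts compressed bytes fetched, while the reported size "8834384" is the unpacked image content.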
Dec 13 08:59:23.412609 systemd-networkd[1372]: cali6517419de3e: Link UP Dec 13 08:59:23.413878 systemd-networkd[1372]: cali6517419de3e: Gained carrier Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.126 [INFO][4458] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0 coredns-76f75df574- kube-system b5f43183-1de9-47e8-b420-26f81d9d2ef1 793 0 2024-12-13 08:58:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-e-e153687e15 coredns-76f75df574-p84dh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6517419de3e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.126 [INFO][4458] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.279 [INFO][4495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" HandleID="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.319 [INFO][4495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" HandleID="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039d7a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-e-e153687e15", "pod":"coredns-76f75df574-p84dh", "timestamp":"2024-12-13 08:59:23.279790048 +0000 UTC"}, Hostname:"ci-4081-2-1-e-e153687e15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.319 [INFO][4495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.320 [INFO][4495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.320 [INFO][4495] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-e153687e15' Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.324 [INFO][4495] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.332 [INFO][4495] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.350 [INFO][4495] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.355 [INFO][4495] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.361 [INFO][4495] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.361 [INFO][4495] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.365 [INFO][4495] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0 Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.381 [INFO][4495] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.395 [INFO][4495] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.67/26] block=192.168.124.64/26 handle="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.396 [INFO][4495] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.67/26] handle="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.396 [INFO][4495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:23.448140 containerd[1473]: 2024-12-13 08:59:23.396 [INFO][4495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.67/26] IPv6=[] ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" HandleID="k8s-pod-network.de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:23.450881 containerd[1473]: 2024-12-13 08:59:23.402 [INFO][4458] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b5f43183-1de9-47e8-b420-26f81d9d2ef1", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"", Pod:"coredns-76f75df574-p84dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517419de3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.450881 containerd[1473]: 2024-12-13 08:59:23.403 [INFO][4458] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.67/32] ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:23.450881 containerd[1473]: 2024-12-13 08:59:23.403 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6517419de3e ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:23.450881 containerd[1473]: 2024-12-13 08:59:23.415 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" 
WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:23.450881 containerd[1473]: 2024-12-13 08:59:23.422 [INFO][4458] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b5f43183-1de9-47e8-b420-26f81d9d2ef1", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0", Pod:"coredns-76f75df574-p84dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517419de3e", MAC:"fe:2b:fe:d2:77:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.450881 containerd[1473]: 2024-12-13 08:59:23.444 [INFO][4458] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0" Namespace="kube-system" Pod="coredns-76f75df574-p84dh" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:23.533517 containerd[1473]: time="2024-12-13T08:59:23.530138716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:23.533517 containerd[1473]: time="2024-12-13T08:59:23.530851442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:23.533517 containerd[1473]: time="2024-12-13T08:59:23.530875723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.533517 containerd[1473]: time="2024-12-13T08:59:23.531014004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.533480 systemd-networkd[1372]: cali4d7e5405e5d: Link UP Dec 13 08:59:23.533825 systemd-networkd[1372]: cali4d7e5405e5d: Gained carrier Dec 13 08:59:23.545577 containerd[1473]: time="2024-12-13T08:59:23.545243695Z" level=info msg="StartContainer for \"d3822ea74496153e192b94792365bb44cc059cbc31419959d353469681081668\" returns successfully" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.154 [INFO][4447] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0 coredns-76f75df574- kube-system 0f81c19a-7569-4158-afcc-88fa220a0f30 791 0 2024-12-13 08:58:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-e-e153687e15 coredns-76f75df574-pmrq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4d7e5405e5d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.155 [INFO][4447] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.283 [INFO][4500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" HandleID="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.320 [INFO][4500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" HandleID="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ec7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-e-e153687e15", "pod":"coredns-76f75df574-pmrq8", "timestamp":"2024-12-13 08:59:23.283785525 +0000 UTC"}, Hostname:"ci-4081-2-1-e-e153687e15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.320 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.396 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.397 [INFO][4500] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-e153687e15' Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.404 [INFO][4500] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.423 [INFO][4500] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.442 [INFO][4500] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.449 [INFO][4500] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.457 [INFO][4500] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.457 [INFO][4500] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.467 [INFO][4500] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3 Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.479 [INFO][4500] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.502 [INFO][4500] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.68/26] block=192.168.124.64/26 handle="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.502 [INFO][4500] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.68/26] handle="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.508 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:23.567753 containerd[1473]: 2024-12-13 08:59:23.508 [INFO][4500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.68/26] IPv6=[] ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" HandleID="k8s-pod-network.0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:23.568753 containerd[1473]: 2024-12-13 08:59:23.518 [INFO][4447] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f81c19a-7569-4158-afcc-88fa220a0f30", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"", Pod:"coredns-76f75df574-pmrq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d7e5405e5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.568753 containerd[1473]: 2024-12-13 08:59:23.518 [INFO][4447] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.68/32] ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:23.568753 containerd[1473]: 2024-12-13 08:59:23.518 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d7e5405e5d ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:23.568753 containerd[1473]: 2024-12-13 08:59:23.532 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" 
WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:23.568753 containerd[1473]: 2024-12-13 08:59:23.532 [INFO][4447] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f81c19a-7569-4158-afcc-88fa220a0f30", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3", Pod:"coredns-76f75df574-pmrq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d7e5405e5d", MAC:"1a:43:ab:7a:1d:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.568753 containerd[1473]: 2024-12-13 08:59:23.558 [INFO][4447] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3" Namespace="kube-system" Pod="coredns-76f75df574-pmrq8" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:23.589940 systemd[1]: Started cri-containerd-de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0.scope - libcontainer container de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0. Dec 13 08:59:23.630295 systemd-networkd[1372]: cali45f7ce608c5: Gained IPv6LL Dec 13 08:59:23.640358 containerd[1473]: time="2024-12-13T08:59:23.632847343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:23.640358 containerd[1473]: time="2024-12-13T08:59:23.632937584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:23.640358 containerd[1473]: time="2024-12-13T08:59:23.632949664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.640358 containerd[1473]: time="2024-12-13T08:59:23.633062185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.659162 systemd-networkd[1372]: calie1f280617f0: Link UP Dec 13 08:59:23.661072 systemd-networkd[1372]: calie1f280617f0: Gained carrier Dec 13 08:59:23.688859 systemd[1]: Started cri-containerd-0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3.scope - libcontainer container 0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3. Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.200 [INFO][4470] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0 calico-apiserver-8775d4447- calico-apiserver 3b341b31-6a86-4a5b-85c7-acbdb333dcd5 794 0 2024-12-13 08:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8775d4447 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-e-e153687e15 calico-apiserver-8775d4447-vsr2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie1f280617f0 [] []}} ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.201 [INFO][4470] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.324 [INFO][4507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" HandleID="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.353 [INFO][4507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" HandleID="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400041d2c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-e-e153687e15", "pod":"calico-apiserver-8775d4447-vsr2v", "timestamp":"2024-12-13 08:59:23.324103577 +0000 UTC"}, Hostname:"ci-4081-2-1-e-e153687e15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.355 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.509 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.509 [INFO][4507] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-e153687e15' Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.517 [INFO][4507] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.552 [INFO][4507] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.568 [INFO][4507] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.572 [INFO][4507] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.580 [INFO][4507] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.581 [INFO][4507] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.586 [INFO][4507] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685 Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.600 [INFO][4507] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.616 [INFO][4507] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.69/26] block=192.168.124.64/26 handle="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.617 [INFO][4507] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.69/26] handle="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.619 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:23.720709 containerd[1473]: 2024-12-13 08:59:23.619 [INFO][4507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.69/26] IPv6=[] ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" HandleID="k8s-pod-network.fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:23.721735 containerd[1473]: 2024-12-13 08:59:23.646 [INFO][4470] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b341b31-6a86-4a5b-85c7-acbdb333dcd5", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"", Pod:"calico-apiserver-8775d4447-vsr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1f280617f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.721735 containerd[1473]: 2024-12-13 08:59:23.646 [INFO][4470] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.69/32] ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:23.721735 containerd[1473]: 2024-12-13 08:59:23.646 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1f280617f0 ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:23.721735 containerd[1473]: 2024-12-13 08:59:23.664 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:23.721735 containerd[1473]: 2024-12-13 08:59:23.678 [INFO][4470] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b341b31-6a86-4a5b-85c7-acbdb333dcd5", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685", Pod:"calico-apiserver-8775d4447-vsr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1f280617f0", MAC:"92:c0:36:00:78:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.721735 containerd[1473]: 2024-12-13 08:59:23.699 [INFO][4470] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685" Namespace="calico-apiserver" Pod="calico-apiserver-8775d4447-vsr2v" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:23.745980 containerd[1473]: time="2024-12-13T08:59:23.745764504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p84dh,Uid:b5f43183-1de9-47e8-b420-26f81d9d2ef1,Namespace:kube-system,Attempt:1,} returns sandbox id \"de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0\"" Dec 13 08:59:23.770992 systemd-networkd[1372]: calibc01c1fa2ad: Link UP Dec 13 08:59:23.771124 systemd-networkd[1372]: calibc01c1fa2ad: Gained carrier Dec 13 08:59:23.785437 containerd[1473]: time="2024-12-13T08:59:23.784920625Z" level=info msg="CreateContainer within sandbox \"de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.247 [INFO][4485] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0 calico-kube-controllers-6bf7964f- calico-system ed34c5d8-9877-44f6-82cc-1e049f25725d 795 0 2024-12-13 08:58:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bf7964f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-e-e153687e15 calico-kube-controllers-6bf7964f-ch45z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibc01c1fa2ad [] []}} ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.247 [INFO][4485] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.369 [INFO][4515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" HandleID="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.390 [INFO][4515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" HandleID="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e4dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-e-e153687e15", "pod":"calico-kube-controllers-6bf7964f-ch45z", "timestamp":"2024-12-13 08:59:23.369187832 +0000 UTC"}, Hostname:"ci-4081-2-1-e-e153687e15", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.391 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.619 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.619 [INFO][4515] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-e-e153687e15' Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.627 [INFO][4515] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.663 [INFO][4515] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.683 [INFO][4515] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.691 [INFO][4515] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.711 [INFO][4515] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.711 [INFO][4515] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.720 [INFO][4515] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.737 [INFO][4515] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.756 [INFO][4515] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.70/26] block=192.168.124.64/26 handle="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.758 [INFO][4515] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.70/26] handle="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" host="ci-4081-2-1-e-e153687e15" Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.759 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
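Comparing the [4507] and [4515] traces shows the two concurrent CNI ADDs serializing on that same lock: the kube-controllers request reported "About to acquire" at 08:59:23.391 but only acquired the lock at 08:59:23.619, immediately after the apiserver assignment released it (~230 ms later), and then claimed the next address in the same block, 192.168.124.70. A toy Go sketch of that contention, with hypothetical hold times standing in for the block load/claim/write-back work:

```go
// Toy illustration (not Calico code) of two CNI ADDs contending for one
// host-wide lock, as the [4507]/[4515] timestamps above show.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var hostWideLock sync.Mutex
	var wg sync.WaitGroup

	assign := func(pod string, hold time.Duration) {
		defer wg.Done()
		asked := time.Now()
		hostWideLock.Lock() // "About to acquire host-wide IPAM lock."
		waited := time.Since(asked)
		time.Sleep(hold) // stand-in for load block + claim IP + write block
		hostWideLock.Unlock() // "Released host-wide IPAM lock."
		fmt.Printf("%s waited %v for the IPAM lock\n", pod, waited)
	}

	wg.Add(2)
	go assign("calico-apiserver-8775d4447-vsr2v", 100*time.Millisecond)
	go assign("calico-kube-controllers-6bf7964f-ch45z", 10*time.Millisecond)
	wg.Wait()
}
```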
Dec 13 08:59:23.818070 containerd[1473]: 2024-12-13 08:59:23.759 [INFO][4515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.70/26] IPv6=[] ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" HandleID="k8s-pod-network.453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.819423 containerd[1473]: 2024-12-13 08:59:23.768 [INFO][4485] cni-plugin/k8s.go 386: Populated endpoint ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0", GenerateName:"calico-kube-controllers-6bf7964f-", Namespace:"calico-system", SelfLink:"", UID:"ed34c5d8-9877-44f6-82cc-1e049f25725d", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf7964f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"", Pod:"calico-kube-controllers-6bf7964f-ch45z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc01c1fa2ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.819423 containerd[1473]: 2024-12-13 08:59:23.769 [INFO][4485] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.70/32] ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.819423 containerd[1473]: 2024-12-13 08:59:23.769 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc01c1fa2ad ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.819423 containerd[1473]: 2024-12-13 08:59:23.770 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.819423 containerd[1473]: 
2024-12-13 08:59:23.771 [INFO][4485] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0", GenerateName:"calico-kube-controllers-6bf7964f-", Namespace:"calico-system", SelfLink:"", UID:"ed34c5d8-9877-44f6-82cc-1e049f25725d", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf7964f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c", Pod:"calico-kube-controllers-6bf7964f-ch45z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc01c1fa2ad", MAC:"52:81:4a:57:58:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:23.819423 containerd[1473]: 2024-12-13 08:59:23.800 [INFO][4485] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c" Namespace="calico-system" Pod="calico-kube-controllers-6bf7964f-ch45z" WorkloadEndpoint="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:23.829568 containerd[1473]: time="2024-12-13T08:59:23.817842088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:23.829568 containerd[1473]: time="2024-12-13T08:59:23.819344942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:23.829568 containerd[1473]: time="2024-12-13T08:59:23.820299871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.829568 containerd[1473]: time="2024-12-13T08:59:23.820985197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.830706 containerd[1473]: time="2024-12-13T08:59:23.830632926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmrq8,Uid:0f81c19a-7569-4158-afcc-88fa220a0f30,Namespace:kube-system,Attempt:1,} returns sandbox id \"0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3\"" Dec 13 08:59:23.845001 containerd[1473]: time="2024-12-13T08:59:23.844941298Z" level=info msg="CreateContainer within sandbox \"de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50a3490be82676f5ea288c193eefd0bb7f174b8ef11c737ad1275555f1db4ad8\"" Dec 13 08:59:23.852349 containerd[1473]: time="2024-12-13T08:59:23.851882922Z" level=info msg="CreateContainer within sandbox \"0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:59:23.852349 containerd[1473]: time="2024-12-13T08:59:23.852046283Z" level=info msg="StartContainer for \"50a3490be82676f5ea288c193eefd0bb7f174b8ef11c737ad1275555f1db4ad8\"" Dec 13 08:59:23.856463 systemd[1]: Started cri-containerd-fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685.scope - libcontainer container fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685. Dec 13 08:59:23.909857 containerd[1473]: time="2024-12-13T08:59:23.909770815Z" level=info msg="CreateContainer within sandbox \"0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4df7238a0009c1fe82769fa4f217ce9085875b62ad8bc5566d6ca89e12c7fb54\"" Dec 13 08:59:23.915245 containerd[1473]: time="2024-12-13T08:59:23.914475139Z" level=info msg="StartContainer for \"4df7238a0009c1fe82769fa4f217ce9085875b62ad8bc5566d6ca89e12c7fb54\"" Dec 13 08:59:23.927646 systemd[1]: Started cri-containerd-50a3490be82676f5ea288c193eefd0bb7f174b8ef11c737ad1275555f1db4ad8.scope - libcontainer container 50a3490be82676f5ea288c193eefd0bb7f174b8ef11c737ad1275555f1db4ad8. Dec 13 08:59:23.929853 containerd[1473]: time="2024-12-13T08:59:23.929710319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:23.930154 containerd[1473]: time="2024-12-13T08:59:23.929825120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:23.930154 containerd[1473]: time="2024-12-13T08:59:23.929843360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.930476 containerd[1473]: time="2024-12-13T08:59:23.930431046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:23.967640 systemd[1]: Started cri-containerd-453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c.scope - libcontainer container 453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c. Dec 13 08:59:23.970048 systemd[1]: Started cri-containerd-4df7238a0009c1fe82769fa4f217ce9085875b62ad8bc5566d6ca89e12c7fb54.scope - libcontainer container 4df7238a0009c1fe82769fa4f217ce9085875b62ad8bc5566d6ca89e12c7fb54. 
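Interleaved with the CNI work, the log walks the standard CRI lifecycle once per pod: RunPodSandbox returns a sandbox ID, CreateContainer registers a container inside that sandbox and returns a container ID, StartContainer runs it, and systemd tracks each as a cri-containerd-<id>.scope unit while the runc v2 shim loads its ttrpc plugins per task. The sketch below mirrors only that call order against a simplified stand-in interface; it is not the real k8s.io/cri-api RuntimeService, and the fake IDs are placeholders.

```go
// Hedged sketch of the CRI call sequence visible in the log. The interface
// is a simplified stand-in, not the actual CRI RuntimeService types.
package main

import "fmt"

type runtimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil // real IDs are 64-hex strings
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt runtimeService = &fakeRuntime{}
	// Mirrors the coredns-76f75df574-p84dh sequence in the log above.
	sb, _ := rt.RunPodSandbox("coredns-76f75df574-p84dh")
	ctr, _ := rt.CreateContainer(sb, "coredns")
	_ = rt.StartContainer(ctr)
	fmt.Println("started", ctr, "in", sb)
}
```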
Dec 13 08:59:24.022325 containerd[1473]: time="2024-12-13T08:59:24.020906597Z" level=info msg="StartContainer for \"50a3490be82676f5ea288c193eefd0bb7f174b8ef11c737ad1275555f1db4ad8\" returns successfully" Dec 13 08:59:24.031223 containerd[1473]: time="2024-12-13T08:59:24.031025729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8775d4447-vsr2v,Uid:3b341b31-6a86-4a5b-85c7-acbdb333dcd5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685\"" Dec 13 08:59:24.044404 containerd[1473]: time="2024-12-13T08:59:24.043658964Z" level=info msg="StartContainer for \"4df7238a0009c1fe82769fa4f217ce9085875b62ad8bc5566d6ca89e12c7fb54\" returns successfully" Dec 13 08:59:24.150666 containerd[1473]: time="2024-12-13T08:59:24.150614937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf7964f-ch45z,Uid:ed34c5d8-9877-44f6-82cc-1e049f25725d,Namespace:calico-system,Attempt:1,} returns sandbox id \"453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c\"" Dec 13 08:59:24.316097 kubelet[2703]: I1213 08:59:24.315932 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:24.428813 systemd[1]: run-containerd-runc-k8s.io-65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389-runc.4oOxeJ.mount: Deactivated successfully. Dec 13 08:59:24.844707 systemd-networkd[1372]: calie1f280617f0: Gained IPv6LL Dec 13 08:59:24.954532 kubelet[2703]: I1213 08:59:24.953480 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pmrq8" podStartSLOduration=37.953432477 podStartE2EDuration="37.953432477s" podCreationTimestamp="2024-12-13 08:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:59:24.953046194 +0000 UTC m=+50.507849613" watchObservedRunningTime="2024-12-13 08:59:24.953432477 +0000 UTC m=+50.508235856" Dec 13 08:59:25.037054 systemd-networkd[1372]: cali6517419de3e: Gained IPv6LL Dec 13 08:59:25.292123 systemd-networkd[1372]: calibc01c1fa2ad: Gained IPv6LL Dec 13 08:59:25.484360 systemd-networkd[1372]: cali4d7e5405e5d: Gained IPv6LL Dec 13 08:59:26.417470 containerd[1473]: time="2024-12-13T08:59:26.417378970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:26.419204 containerd[1473]: time="2024-12-13T08:59:26.418958624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 08:59:26.420466 containerd[1473]: time="2024-12-13T08:59:26.420199835Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:26.434403 containerd[1473]: time="2024-12-13T08:59:26.433880436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:26.434403 containerd[1473]: time="2024-12-13T08:59:26.434241119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.238248403s" Dec 13 08:59:26.434403 containerd[1473]: time="2024-12-13T08:59:26.434275879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 08:59:26.435674 containerd[1473]: time="2024-12-13T08:59:26.435452930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 08:59:26.443772 containerd[1473]: time="2024-12-13T08:59:26.443619602Z" level=info msg="CreateContainer within sandbox \"a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:59:26.479431 containerd[1473]: time="2024-12-13T08:59:26.479006555Z" level=info msg="CreateContainer within sandbox \"a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9b2761c30baa646adc6901c76844ba2a9e9add94bc57eeb81a9b366f564709f2\"" Dec 13 08:59:26.481478 containerd[1473]: time="2024-12-13T08:59:26.481346616Z" level=info msg="StartContainer for \"9b2761c30baa646adc6901c76844ba2a9e9add94bc57eeb81a9b366f564709f2\"" Dec 13 08:59:26.532632 systemd[1]: Started cri-containerd-9b2761c30baa646adc6901c76844ba2a9e9add94bc57eeb81a9b366f564709f2.scope - libcontainer container 9b2761c30baa646adc6901c76844ba2a9e9add94bc57eeb81a9b366f564709f2. Dec 13 08:59:26.575600 containerd[1473]: time="2024-12-13T08:59:26.574818804Z" level=info msg="StartContainer for \"9b2761c30baa646adc6901c76844ba2a9e9add94bc57eeb81a9b366f564709f2\" returns successfully" Dec 13 08:59:26.975420 kubelet[2703]: I1213 08:59:26.974091 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p84dh" podStartSLOduration=39.974047179 podStartE2EDuration="39.974047179s" podCreationTimestamp="2024-12-13 08:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:59:24.990715296 +0000 UTC m=+50.545518675" watchObservedRunningTime="2024-12-13 08:59:26.974047179 +0000 UTC m=+52.528850558" Dec 13 08:59:28.341728 containerd[1473]: time="2024-12-13T08:59:28.341663181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:28.343316 containerd[1473]: time="2024-12-13T08:59:28.343258314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 08:59:28.347405 containerd[1473]: time="2024-12-13T08:59:28.346602543Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:28.352169 containerd[1473]: time="2024-12-13T08:59:28.352099191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:28.354498 containerd[1473]: time="2024-12-13T08:59:28.354375010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.91888528s" Dec 13 08:59:28.354498 containerd[1473]: time="2024-12-13T08:59:28.354459171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 08:59:28.356841 containerd[1473]: time="2024-12-13T08:59:28.356792991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 08:59:28.358060 containerd[1473]: time="2024-12-13T08:59:28.358016322Z" level=info msg="CreateContainer within sandbox \"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 08:59:28.419611 containerd[1473]: time="2024-12-13T08:59:28.419505013Z" level=info msg="CreateContainer within sandbox \"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"33badda1e14c877ee7f3bcdceec23f67b9fc924aa4fc2bae561b60744ffb9532\"" Dec 13 08:59:28.420306 containerd[1473]: time="2024-12-13T08:59:28.420267179Z" level=info msg="StartContainer for \"33badda1e14c877ee7f3bcdceec23f67b9fc924aa4fc2bae561b60744ffb9532\"" Dec 13 08:59:28.471052 systemd[1]: run-containerd-runc-k8s.io-33badda1e14c877ee7f3bcdceec23f67b9fc924aa4fc2bae561b60744ffb9532-runc.1FmJmS.mount: Deactivated successfully. Dec 13 08:59:28.483474 systemd[1]: Started cri-containerd-33badda1e14c877ee7f3bcdceec23f67b9fc924aa4fc2bae561b60744ffb9532.scope - libcontainer container 33badda1e14c877ee7f3bcdceec23f67b9fc924aa4fc2bae561b60744ffb9532. 
Dec 13 08:59:28.535628 containerd[1473]: time="2024-12-13T08:59:28.535035170Z" level=info msg="StartContainer for \"33badda1e14c877ee7f3bcdceec23f67b9fc924aa4fc2bae561b60744ffb9532\" returns successfully" Dec 13 08:59:28.741032 containerd[1473]: time="2024-12-13T08:59:28.740898787Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:28.741861 containerd[1473]: time="2024-12-13T08:59:28.741826555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 08:59:28.745589 kubelet[2703]: I1213 08:59:28.744667 2703 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 08:59:28.750869 kubelet[2703]: I1213 08:59:28.750793 2703 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 08:59:28.751520 containerd[1473]: time="2024-12-13T08:59:28.751399678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 394.547045ms" Dec 13 08:59:28.751983 containerd[1473]: time="2024-12-13T08:59:28.751863042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 08:59:28.754930 containerd[1473]: time="2024-12-13T08:59:28.753596297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 08:59:28.756346 containerd[1473]: time="2024-12-13T08:59:28.756263880Z" level=info msg="CreateContainer within sandbox \"fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:59:28.790408 containerd[1473]: time="2024-12-13T08:59:28.789956011Z" level=info msg="CreateContainer within sandbox \"fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c5e6a7797d0adadc01110b2f6947950f20b35bb792065a72dffd61d70a0e4c5\"" Dec 13 08:59:28.791971 containerd[1473]: time="2024-12-13T08:59:28.791936788Z" level=info msg="StartContainer for \"1c5e6a7797d0adadc01110b2f6947950f20b35bb792065a72dffd61d70a0e4c5\"" Dec 13 08:59:28.838912 kubelet[2703]: I1213 08:59:28.838863 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8775d4447-zfgm4" podStartSLOduration=29.459819592 podStartE2EDuration="33.838813672s" podCreationTimestamp="2024-12-13 08:58:55 +0000 UTC" firstStartedPulling="2024-12-13 08:59:22.056086126 +0000 UTC m=+47.610889505" lastFinishedPulling="2024-12-13 08:59:26.435080246 +0000 UTC m=+51.989883585" observedRunningTime="2024-12-13 08:59:26.976944845 +0000 UTC m=+52.531748224" watchObservedRunningTime="2024-12-13 08:59:28.838813672 +0000 UTC m=+54.393617011" Dec 13 08:59:28.861690 systemd[1]: Started cri-containerd-1c5e6a7797d0adadc01110b2f6947950f20b35bb792065a72dffd61d70a0e4c5.scope - libcontainer container 
1c5e6a7797d0adadc01110b2f6947950f20b35bb792065a72dffd61d70a0e4c5. Dec 13 08:59:28.925488 containerd[1473]: time="2024-12-13T08:59:28.924069888Z" level=info msg="StartContainer for \"1c5e6a7797d0adadc01110b2f6947950f20b35bb792065a72dffd61d70a0e4c5\" returns successfully" Dec 13 08:59:28.993170 kubelet[2703]: I1213 08:59:28.993047 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8775d4447-vsr2v" podStartSLOduration=29.276849574 podStartE2EDuration="33.992998163s" podCreationTimestamp="2024-12-13 08:58:55 +0000 UTC" firstStartedPulling="2024-12-13 08:59:24.0366149 +0000 UTC m=+49.591418279" lastFinishedPulling="2024-12-13 08:59:28.752763489 +0000 UTC m=+54.307566868" observedRunningTime="2024-12-13 08:59:28.992500639 +0000 UTC m=+54.547304018" watchObservedRunningTime="2024-12-13 08:59:28.992998163 +0000 UTC m=+54.547801542" Dec 13 08:59:29.010238 kubelet[2703]: I1213 08:59:29.009952 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-chvmc" podStartSLOduration=25.751459646 podStartE2EDuration="34.009899828s" podCreationTimestamp="2024-12-13 08:58:55 +0000 UTC" firstStartedPulling="2024-12-13 08:59:20.096517033 +0000 UTC m=+45.651320412" lastFinishedPulling="2024-12-13 08:59:28.354957215 +0000 UTC m=+53.909760594" observedRunningTime="2024-12-13 08:59:29.009577185 +0000 UTC m=+54.564380564" watchObservedRunningTime="2024-12-13 08:59:29.009899828 +0000 UTC m=+54.564703207" Dec 13 08:59:29.976468 kubelet[2703]: I1213 08:59:29.974651 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:31.477002 containerd[1473]: time="2024-12-13T08:59:31.476039815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:31.478861 containerd[1473]: time="2024-12-13T08:59:31.478805318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 08:59:31.480906 containerd[1473]: time="2024-12-13T08:59:31.480858735Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:31.485697 containerd[1473]: time="2024-12-13T08:59:31.485639895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:31.488624 containerd[1473]: time="2024-12-13T08:59:31.487503270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.732406721s" Dec 13 08:59:31.488624 containerd[1473]: time="2024-12-13T08:59:31.487595431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 08:59:31.520113 containerd[1473]: time="2024-12-13T08:59:31.520064101Z" level=info msg="CreateContainer within sandbox 
\"453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 08:59:31.539575 containerd[1473]: time="2024-12-13T08:59:31.539342782Z" level=info msg="CreateContainer within sandbox \"453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"34af568a591ab8558c39dda12cbf71be1d94bb0bb13efcb6810f2cd88697e737\"" Dec 13 08:59:31.543243 containerd[1473]: time="2024-12-13T08:59:31.543148733Z" level=info msg="StartContainer for \"34af568a591ab8558c39dda12cbf71be1d94bb0bb13efcb6810f2cd88697e737\"" Dec 13 08:59:31.594807 systemd[1]: Started cri-containerd-34af568a591ab8558c39dda12cbf71be1d94bb0bb13efcb6810f2cd88697e737.scope - libcontainer container 34af568a591ab8558c39dda12cbf71be1d94bb0bb13efcb6810f2cd88697e737. Dec 13 08:59:31.653607 containerd[1473]: time="2024-12-13T08:59:31.653115969Z" level=info msg="StartContainer for \"34af568a591ab8558c39dda12cbf71be1d94bb0bb13efcb6810f2cd88697e737\" returns successfully" Dec 13 08:59:31.884919 update_engine[1453]: I20241213 08:59:31.883800 1453 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 08:59:31.884919 update_engine[1453]: I20241213 08:59:31.883859 1453 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 08:59:31.887699 update_engine[1453]: I20241213 08:59:31.885416 1453 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889595 1453 omaha_request_params.cc:62] Current group set to stable Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889714 1453 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889722 1453 update_attempter.cc:643] Scheduling an action processor start. Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889743 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889780 1453 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889845 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889853 1453 omaha_request_action.cc:272] Request: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: Dec 13 08:59:31.891169 update_engine[1453]: I20241213 08:59:31.889858 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 08:59:31.891662 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 08:59:31.893663 update_engine[1453]: I20241213 08:59:31.893625 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 08:59:31.894145 update_engine[1453]: I20241213 08:59:31.894115 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 08:59:31.895261 update_engine[1453]: E20241213 08:59:31.895231 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 08:59:31.895740 update_engine[1453]: I20241213 08:59:31.895712 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 08:59:32.007452 kubelet[2703]: I1213 08:59:32.006329 2703 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bf7964f-ch45z" podStartSLOduration=29.671056389 podStartE2EDuration="37.006286747s" podCreationTimestamp="2024-12-13 08:58:55 +0000 UTC" firstStartedPulling="2024-12-13 08:59:24.152600275 +0000 UTC m=+49.707403654" lastFinishedPulling="2024-12-13 08:59:31.487830633 +0000 UTC m=+57.042634012" observedRunningTime="2024-12-13 08:59:32.005837704 +0000 UTC m=+57.560641083" watchObservedRunningTime="2024-12-13 08:59:32.006286747 +0000 UTC m=+57.561090126" Dec 13 08:59:34.624616 containerd[1473]: time="2024-12-13T08:59:34.624006913Z" level=info msg="StopPodSandbox for \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\"" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.680 [WARNING][5126] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b341b31-6a86-4a5b-85c7-acbdb333dcd5", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685", Pod:"calico-apiserver-8775d4447-vsr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1f280617f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.680 [INFO][5126] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.680 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" iface="eth0" netns="" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.680 [INFO][5126] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.680 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.701 [INFO][5132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.701 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.701 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.713 [WARNING][5132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.713 [INFO][5132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.715 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:34.722051 containerd[1473]: 2024-12-13 08:59:34.718 [INFO][5126] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.722051 containerd[1473]: time="2024-12-13T08:59:34.721480897Z" level=info msg="TearDown network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\" successfully" Dec 13 08:59:34.722051 containerd[1473]: time="2024-12-13T08:59:34.721513777Z" level=info msg="StopPodSandbox for \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\" returns successfully" Dec 13 08:59:34.722684 containerd[1473]: time="2024-12-13T08:59:34.722360184Z" level=info msg="RemovePodSandbox for \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\"" Dec 13 08:59:34.722684 containerd[1473]: time="2024-12-13T08:59:34.722475145Z" level=info msg="Forcibly stopping sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\"" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.798 [WARNING][5150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b341b31-6a86-4a5b-85c7-acbdb333dcd5", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"fd097115303eb0c338eb65eb9dffd4c34897791b0b8ce5fc0b549a0324d3c685", Pod:"calico-apiserver-8775d4447-vsr2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1f280617f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.798 [INFO][5150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.798 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" iface="eth0" netns="" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.798 [INFO][5150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.798 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.819 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.819 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.819 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.833 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.833 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" HandleID="k8s-pod-network.5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--vsr2v-eth0" Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.837 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:34.840428 containerd[1473]: 2024-12-13 08:59:34.838 [INFO][5150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d" Dec 13 08:59:34.840428 containerd[1473]: time="2024-12-13T08:59:34.840289652Z" level=info msg="TearDown network for sandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\" successfully" Dec 13 08:59:34.849530 containerd[1473]: time="2024-12-13T08:59:34.849261004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:34.849530 containerd[1473]: time="2024-12-13T08:59:34.849347125Z" level=info msg="RemovePodSandbox \"5702fc315c322ceba77e4fb6e1e383473de1d7f70501e08f5b78b55f19f3538d\" returns successfully" Dec 13 08:59:34.850113 containerd[1473]: time="2024-12-13T08:59:34.850072411Z" level=info msg="StopPodSandbox for \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\"" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.902 [WARNING][5174] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8cd9a8db-6f3d-4382-8dc3-75aed978669b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396", Pod:"csi-node-driver-chvmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b710e1a750", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.903 [INFO][5174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.903 [INFO][5174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" iface="eth0" netns="" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.903 [INFO][5174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.903 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.944 [INFO][5183] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.945 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.945 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.959 [WARNING][5183] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.959 [INFO][5183] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.962 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:34.966155 containerd[1473]: 2024-12-13 08:59:34.963 [INFO][5174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:34.966155 containerd[1473]: time="2024-12-13T08:59:34.965943583Z" level=info msg="TearDown network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\" successfully" Dec 13 08:59:34.966155 containerd[1473]: time="2024-12-13T08:59:34.965968823Z" level=info msg="StopPodSandbox for \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\" returns successfully" Dec 13 08:59:34.967312 containerd[1473]: time="2024-12-13T08:59:34.967081752Z" level=info msg="RemovePodSandbox for \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\"" Dec 13 08:59:34.967312 containerd[1473]: time="2024-12-13T08:59:34.967122072Z" level=info msg="Forcibly stopping sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\"" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.047 [WARNING][5203] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8cd9a8db-6f3d-4382-8dc3-75aed978669b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"d215ad6ded80dfe167b830249fd6b0feb0e36bb4c566d2ab9b0faab0ddaac396", Pod:"csi-node-driver-chvmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b710e1a750", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.047 [INFO][5203] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.047 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" iface="eth0" netns="" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.047 [INFO][5203] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.047 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.081 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.082 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.082 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.093 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.094 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" HandleID="k8s-pod-network.8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Workload="ci--4081--2--1--e--e153687e15-k8s-csi--node--driver--chvmc-eth0" Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.096 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.104421 containerd[1473]: 2024-12-13 08:59:35.100 [INFO][5203] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef" Dec 13 08:59:35.104421 containerd[1473]: time="2024-12-13T08:59:35.103914243Z" level=info msg="TearDown network for sandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\" successfully" Dec 13 08:59:35.112180 containerd[1473]: time="2024-12-13T08:59:35.110263694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:35.112180 containerd[1473]: time="2024-12-13T08:59:35.110450455Z" level=info msg="RemovePodSandbox \"8f0cce603f2bedeb25c8913ada59f0721aed435654f1c4f23989e30b79e2ccef\" returns successfully" Dec 13 08:59:35.112180 containerd[1473]: time="2024-12-13T08:59:35.111994788Z" level=info msg="StopPodSandbox for \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\"" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.164 [WARNING][5227] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"4825802f-c466-4d66-9dd0-24e12a47633b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac", Pod:"calico-apiserver-8775d4447-zfgm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45f7ce608c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.165 [INFO][5227] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.165 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" iface="eth0" netns="" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.165 [INFO][5227] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.165 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.187 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.187 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.187 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.199 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.199 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.202 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.205939 containerd[1473]: 2024-12-13 08:59:35.203 [INFO][5227] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.205939 containerd[1473]: time="2024-12-13T08:59:35.205439851Z" level=info msg="TearDown network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\" successfully" Dec 13 08:59:35.205939 containerd[1473]: time="2024-12-13T08:59:35.205468251Z" level=info msg="StopPodSandbox for \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\" returns successfully" Dec 13 08:59:35.206987 containerd[1473]: time="2024-12-13T08:59:35.206591140Z" level=info msg="RemovePodSandbox for \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\"" Dec 13 08:59:35.206987 containerd[1473]: time="2024-12-13T08:59:35.206662620Z" level=info msg="Forcibly stopping sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\"" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.264 [WARNING][5251] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0", GenerateName:"calico-apiserver-8775d4447-", Namespace:"calico-apiserver", SelfLink:"", UID:"4825802f-c466-4d66-9dd0-24e12a47633b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8775d4447", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"a1944b66207acd03a9d8c494314031f1be3480becb405e7ea6c66b37205b5bac", Pod:"calico-apiserver-8775d4447-zfgm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45f7ce608c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.264 [INFO][5251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.264 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" iface="eth0" netns="" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.264 [INFO][5251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.264 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.296 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.296 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.296 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.306 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.306 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" HandleID="k8s-pod-network.c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--apiserver--8775d4447--zfgm4-eth0" Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.309 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.312690 containerd[1473]: 2024-12-13 08:59:35.310 [INFO][5251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b" Dec 13 08:59:35.314729 containerd[1473]: time="2024-12-13T08:59:35.312808825Z" level=info msg="TearDown network for sandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\" successfully" Dec 13 08:59:35.323673 containerd[1473]: time="2024-12-13T08:59:35.323592311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:35.323823 containerd[1473]: time="2024-12-13T08:59:35.323705471Z" level=info msg="RemovePodSandbox \"c07cc6992b5e01bf037f5cec51f01f0e7e7d70288b71d7796437e9588d66068b\" returns successfully" Dec 13 08:59:35.324677 containerd[1473]: time="2024-12-13T08:59:35.324267996Z" level=info msg="StopPodSandbox for \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\"" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.375 [WARNING][5275] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0", GenerateName:"calico-kube-controllers-6bf7964f-", Namespace:"calico-system", SelfLink:"", UID:"ed34c5d8-9877-44f6-82cc-1e049f25725d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf7964f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c", Pod:"calico-kube-controllers-6bf7964f-ch45z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc01c1fa2ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.375 [INFO][5275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.375 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" iface="eth0" netns="" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.375 [INFO][5275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.375 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.400 [INFO][5281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.400 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.400 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.412 [WARNING][5281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.412 [INFO][5281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.415 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.418252 containerd[1473]: 2024-12-13 08:59:35.416 [INFO][5275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.419995 containerd[1473]: time="2024-12-13T08:59:35.418289784Z" level=info msg="TearDown network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\" successfully" Dec 13 08:59:35.419995 containerd[1473]: time="2024-12-13T08:59:35.418317344Z" level=info msg="StopPodSandbox for \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\" returns successfully" Dec 13 08:59:35.419995 containerd[1473]: time="2024-12-13T08:59:35.418879188Z" level=info msg="RemovePodSandbox for \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\"" Dec 13 08:59:35.419995 containerd[1473]: time="2024-12-13T08:59:35.418910749Z" level=info msg="Forcibly stopping sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\"" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.476 [WARNING][5299] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0", GenerateName:"calico-kube-controllers-6bf7964f-", Namespace:"calico-system", SelfLink:"", UID:"ed34c5d8-9877-44f6-82cc-1e049f25725d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf7964f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"453d4dc0d949b94e4afeb5f19dfef9d2a40e3d4304a3334ccba392e8382e6c1c", Pod:"calico-kube-controllers-6bf7964f-ch45z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc01c1fa2ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.476 [INFO][5299] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.476 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" iface="eth0" netns="" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.476 [INFO][5299] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.476 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.505 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.506 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.506 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.525 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.525 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" HandleID="k8s-pod-network.3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Workload="ci--4081--2--1--e--e153687e15-k8s-calico--kube--controllers--6bf7964f--ch45z-eth0" Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.528 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.532279 containerd[1473]: 2024-12-13 08:59:35.530 [INFO][5299] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187" Dec 13 08:59:35.532757 containerd[1473]: time="2024-12-13T08:59:35.532330331Z" level=info msg="TearDown network for sandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\" successfully" Dec 13 08:59:35.536657 containerd[1473]: time="2024-12-13T08:59:35.536605005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:35.537205 containerd[1473]: time="2024-12-13T08:59:35.536696886Z" level=info msg="RemovePodSandbox \"3e194c813a2956d13de6909a1b3b02ed3da3ff8b2c8859ad0541f2c130a7d187\" returns successfully" Dec 13 08:59:35.537205 containerd[1473]: time="2024-12-13T08:59:35.537200610Z" level=info msg="StopPodSandbox for \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\"" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.587 [WARNING][5323] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f81c19a-7569-4158-afcc-88fa220a0f30", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3", Pod:"coredns-76f75df574-pmrq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d7e5405e5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.587 [INFO][5323] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.587 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" iface="eth0" netns="" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.588 [INFO][5323] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.588 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.611 [INFO][5330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.611 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.611 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.623 [WARNING][5330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.623 [INFO][5330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.626 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.630133 containerd[1473]: 2024-12-13 08:59:35.628 [INFO][5323] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.630133 containerd[1473]: time="2024-12-13T08:59:35.630078388Z" level=info msg="TearDown network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\" successfully" Dec 13 08:59:35.630133 containerd[1473]: time="2024-12-13T08:59:35.630105828Z" level=info msg="StopPodSandbox for \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\" returns successfully" Dec 13 08:59:35.633183 containerd[1473]: time="2024-12-13T08:59:35.631683001Z" level=info msg="RemovePodSandbox for \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\"" Dec 13 08:59:35.633183 containerd[1473]: time="2024-12-13T08:59:35.631726041Z" level=info msg="Forcibly stopping sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\"" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.685 [WARNING][5348] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f81c19a-7569-4158-afcc-88fa220a0f30", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"0d318635cca24b0cf07de929f95f85b8f4949f8e826e73d35b465559cb6420d3", Pod:"coredns-76f75df574-pmrq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d7e5405e5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.686 [INFO][5348] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.686 [INFO][5348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" iface="eth0" netns="" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.686 [INFO][5348] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.686 [INFO][5348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.715 [INFO][5354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.716 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.716 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.726 [WARNING][5354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.726 [INFO][5354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" HandleID="k8s-pod-network.aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--pmrq8-eth0" Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.728 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.732263 containerd[1473]: 2024-12-13 08:59:35.730 [INFO][5348] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0" Dec 13 08:59:35.732263 containerd[1473]: time="2024-12-13T08:59:35.732066919Z" level=info msg="TearDown network for sandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\" successfully" Dec 13 08:59:35.736401 containerd[1473]: time="2024-12-13T08:59:35.736333113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:35.736572 containerd[1473]: time="2024-12-13T08:59:35.736445674Z" level=info msg="RemovePodSandbox \"aba08fe93577d53cce45c57d22facee1df76d537eddb9297f0ca8ec8b19be9d0\" returns successfully" Dec 13 08:59:35.737536 containerd[1473]: time="2024-12-13T08:59:35.737082239Z" level=info msg="StopPodSandbox for \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\"" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.784 [WARNING][5372] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b5f43183-1de9-47e8-b420-26f81d9d2ef1", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0", Pod:"coredns-76f75df574-p84dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517419de3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.784 [INFO][5372] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.784 [INFO][5372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" iface="eth0" netns="" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.784 [INFO][5372] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.784 [INFO][5372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.812 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.812 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.812 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.828 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.828 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.831 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.835970 containerd[1473]: 2024-12-13 08:59:35.833 [INFO][5372] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.837330 containerd[1473]: time="2024-12-13T08:59:35.836844473Z" level=info msg="TearDown network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\" successfully" Dec 13 08:59:35.837330 containerd[1473]: time="2024-12-13T08:59:35.836878953Z" level=info msg="StopPodSandbox for \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\" returns successfully" Dec 13 08:59:35.839116 containerd[1473]: time="2024-12-13T08:59:35.838142843Z" level=info msg="RemovePodSandbox for \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\"" Dec 13 08:59:35.839116 containerd[1473]: time="2024-12-13T08:59:35.838179803Z" level=info msg="Forcibly stopping sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\"" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.882 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b5f43183-1de9-47e8-b420-26f81d9d2ef1", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-e-e153687e15", ContainerID:"de788ca822c50b49cbfd1bdbea9eb4df6852b53436aca4448d3cbca9b45f82c0", Pod:"coredns-76f75df574-p84dh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6517419de3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.882 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.882 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" iface="eth0" netns="" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.882 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.882 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.918 [INFO][5402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.918 [INFO][5402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.919 [INFO][5402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.940 [WARNING][5402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.940 [INFO][5402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" HandleID="k8s-pod-network.657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Workload="ci--4081--2--1--e--e153687e15-k8s-coredns--76f75df574--p84dh-eth0" Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.944 [INFO][5402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:35.947524 containerd[1473]: 2024-12-13 08:59:35.946 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3" Dec 13 08:59:35.948082 containerd[1473]: time="2024-12-13T08:59:35.948014117Z" level=info msg="TearDown network for sandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\" successfully" Dec 13 08:59:35.953284 containerd[1473]: time="2024-12-13T08:59:35.953210198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:35.953284 containerd[1473]: time="2024-12-13T08:59:35.953292679Z" level=info msg="RemovePodSandbox \"657217c5c1ff3a67c7af89636363d365b33f10067179b91956c8b7d22fa163b3\" returns successfully" Dec 13 08:59:41.891434 update_engine[1453]: I20241213 08:59:41.891203 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 08:59:41.891946 update_engine[1453]: I20241213 08:59:41.891654 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 08:59:41.892012 update_engine[1453]: I20241213 08:59:41.891968 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 08:59:41.892910 update_engine[1453]: E20241213 08:59:41.892812 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 08:59:41.893012 update_engine[1453]: I20241213 08:59:41.892963 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 08:59:51.892505 update_engine[1453]: I20241213 08:59:51.892421 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 08:59:51.893109 update_engine[1453]: I20241213 08:59:51.892683 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 08:59:51.893109 update_engine[1453]: I20241213 08:59:51.892920 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 08:59:51.894643 update_engine[1453]: E20241213 08:59:51.894541 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 08:59:51.894643 update_engine[1453]: I20241213 08:59:51.894636 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 09:00:01.429594 kubelet[2703]: I1213 09:00:01.429492 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 09:00:01.892120 update_engine[1453]: I20241213 09:00:01.891906 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 09:00:01.892729 update_engine[1453]: I20241213 09:00:01.892360 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 09:00:01.892729 update_engine[1453]: I20241213 09:00:01.892663 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 09:00:01.893581 update_engine[1453]: E20241213 09:00:01.893508 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 09:00:01.893725 update_engine[1453]: I20241213 09:00:01.893594 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 09:00:01.893725 update_engine[1453]: I20241213 09:00:01.893606 1453 omaha_request_action.cc:617] Omaha request response: Dec 13 09:00:01.893725 update_engine[1453]: E20241213 09:00:01.893702 1453 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 09:00:01.893725 update_engine[1453]: I20241213 09:00:01.893721 1453 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893728 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893739 1453 update_attempter.cc:306] Processing Done. Dec 13 09:00:01.893984 update_engine[1453]: E20241213 09:00:01.893755 1453 update_attempter.cc:619] Update failed. Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893761 1453 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893767 1453 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893774 1453 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893857 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893883 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893890 1453 omaha_request_action.cc:272] Request: Dec 13 09:00:01.893984 update_engine[1453]: Dec 13 09:00:01.893984 update_engine[1453]: Dec 13 09:00:01.893984 update_engine[1453]: Dec 13 09:00:01.893984 update_engine[1453]: Dec 13 09:00:01.893984 update_engine[1453]: Dec 13 09:00:01.893984 update_engine[1453]: Dec 13 09:00:01.893984 update_engine[1453]: I20241213 09:00:01.893898 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 09:00:01.894724 update_engine[1453]: I20241213 09:00:01.894060 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 09:00:01.894724 update_engine[1453]: I20241213 09:00:01.894330 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 09:00:01.895828 update_engine[1453]: E20241213 09:00:01.895181 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 09:00:01.895828 update_engine[1453]: I20241213 09:00:01.895246 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 09:00:01.895828 update_engine[1453]: I20241213 09:00:01.895258 1453 omaha_request_action.cc:617] Omaha request response: Dec 13 09:00:01.895828 update_engine[1453]: I20241213 09:00:01.895265 1453 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 09:00:01.895828 update_engine[1453]: I20241213 09:00:01.895272 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 09:00:01.895828 update_engine[1453]: I20241213 09:00:01.895278 1453 update_attempter.cc:306] Processing Done. Dec 13 09:00:01.895828 update_engine[1453]: I20241213 09:00:01.895284 1453 update_attempter.cc:310] Error event sent. Dec 13 09:00:01.895828 update_engine[1453]: I20241213 09:00:01.895294 1453 update_check_scheduler.cc:74] Next update check in 41m40s Dec 13 09:00:01.896334 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 09:00:01.896334 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 09:00:08.493145 systemd[1]: Started sshd@8-5.75.230.207:22-116.162.80.221:53578.service - OpenSSH per-connection server daemon (116.162.80.221:53578). Dec 13 09:00:13.332062 sshd[5490]: Connection closed by authenticating user root 116.162.80.221 port 53578 [preauth] Dec 13 09:00:13.335157 systemd[1]: sshd@8-5.75.230.207:22-116.162.80.221:53578.service: Deactivated successfully. Dec 13 09:00:13.840893 systemd[1]: Started sshd@9-5.75.230.207:22-116.162.80.221:35108.service - OpenSSH per-connection server daemon (116.162.80.221:35108). Dec 13 09:00:18.956286 sshd[5495]: Connection closed by authenticating user root 116.162.80.221 port 35108 [preauth] Dec 13 09:00:18.959846 systemd[1]: sshd@9-5.75.230.207:22-116.162.80.221:35108.service: Deactivated successfully. Dec 13 09:00:19.460727 systemd[1]: Started sshd@10-5.75.230.207:22-116.162.80.221:45500.service - OpenSSH per-connection server daemon (116.162.80.221:45500). 
Dec 13 09:00:21.821408 sshd[5504]: Connection closed by authenticating user root 116.162.80.221 port 45500 [preauth]
Dec 13 09:00:21.820142 systemd[1]: sshd@10-5.75.230.207:22-116.162.80.221:45500.service: Deactivated successfully.
Dec 13 09:00:22.317877 systemd[1]: Started sshd@11-5.75.230.207:22-116.162.80.221:50488.service - OpenSSH per-connection server daemon (116.162.80.221:50488).
Dec 13 09:00:27.083115 sshd[5509]: Connection closed by authenticating user root 116.162.80.221 port 50488 [preauth]
Dec 13 09:00:27.085713 systemd[1]: sshd@11-5.75.230.207:22-116.162.80.221:50488.service: Deactivated successfully.
Dec 13 09:00:27.577960 systemd[1]: Started sshd@12-5.75.230.207:22-116.162.80.221:60012.service - OpenSSH per-connection server daemon (116.162.80.221:60012).
Dec 13 09:00:29.881980 sshd[5535]: Connection closed by authenticating user root 116.162.80.221 port 60012 [preauth]
Dec 13 09:00:29.885406 systemd[1]: sshd@12-5.75.230.207:22-116.162.80.221:60012.service: Deactivated successfully.
Dec 13 09:00:32.438206 systemd[1]: Started sshd@13-5.75.230.207:22-116.162.80.221:37284.service - OpenSSH per-connection server daemon (116.162.80.221:37284).
Dec 13 09:00:37.393016 sshd[5558]: Connection closed by authenticating user root 116.162.80.221 port 37284 [preauth]
Dec 13 09:00:37.395988 systemd[1]: sshd@13-5.75.230.207:22-116.162.80.221:37284.service: Deactivated successfully.
Dec 13 09:00:37.574354 systemd[1]: run-containerd-runc-k8s.io-34af568a591ab8558c39dda12cbf71be1d94bb0bb13efcb6810f2cd88697e737-runc.1DsVTA.mount: Deactivated successfully.
Dec 13 09:00:37.897798 systemd[1]: Started sshd@14-5.75.230.207:22-116.162.80.221:51294.service - OpenSSH per-connection server daemon (116.162.80.221:51294).
Dec 13 09:00:45.428671 sshd[5583]: Connection closed by authenticating user root 116.162.80.221 port 51294 [preauth]
Dec 13 09:00:45.431277 systemd[1]: sshd@14-5.75.230.207:22-116.162.80.221:51294.service: Deactivated successfully.
Dec 13 09:00:45.931850 systemd[1]: Started sshd@15-5.75.230.207:22-116.162.80.221:37584.service - OpenSSH per-connection server daemon (116.162.80.221:37584).
Dec 13 09:00:52.079956 sshd[5596]: Connection closed by authenticating user root 116.162.80.221 port 37584 [preauth]
Dec 13 09:00:52.084943 systemd[1]: sshd@15-5.75.230.207:22-116.162.80.221:37584.service: Deactivated successfully.
Dec 13 09:00:52.585213 systemd[1]: Started sshd@16-5.75.230.207:22-116.162.80.221:49470.service - OpenSSH per-connection server daemon (116.162.80.221:49470).
Dec 13 09:00:54.901055 sshd[5603]: Connection closed by authenticating user root 116.162.80.221 port 49470 [preauth]
Dec 13 09:00:54.904502 systemd[1]: sshd@16-5.75.230.207:22-116.162.80.221:49470.service: Deactivated successfully.
Dec 13 09:01:27.992402 systemd[1]: run-containerd-runc-k8s.io-34af568a591ab8558c39dda12cbf71be1d94bb0bb13efcb6810f2cd88697e737-runc.1KiyNV.mount: Deactivated successfully.
Dec 13 09:01:54.333167 systemd[1]: run-containerd-runc-k8s.io-65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389-runc.biJuag.mount: Deactivated successfully.
Dec 13 09:02:24.338002 systemd[1]: run-containerd-runc-k8s.io-65c99f7768b02dccfa77c611939f37d94fbceeaf81cd422202093ea914360389-runc.ZnIOKk.mount: Deactivated successfully.
Dec 13 09:03:26.129536 systemd[1]: Started sshd@17-5.75.230.207:22-139.178.89.65:45474.service - OpenSSH per-connection server daemon (139.178.89.65:45474).
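
[Annotation] The sshd@8 through sshd@16 units above show a classic SSH brute-force pattern: 116.162.80.221 reconnects every few seconds, each connection spawning a per-connection sshd@N-...service instance via systemd socket activation, and every attempt ends with "[preauth]", meaning the client was dropped before authentication completed; no login succeeded. Since the host evidently accepts publickey logins only (see the later "Accepted publickey" entries), such scans are mostly noise, but they can be cut down in sshd itself. An illustrative hardening fragment, with the drop-in path assumed rather than taken from this host:

    # /etc/ssh/sshd_config.d/10-hardening.conf (illustrative drop-in)
    PermitRootLogin no           # reject root attempts outright
    PasswordAuthentication no    # keys only; password scans fail immediately
    MaxStartups 10:30:60         # probabilistically drop unauthenticated floods
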
Dec 13 09:03:27.133591 sshd[5940]: Accepted publickey for core from 139.178.89.65 port 45474 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:27.137942 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:27.149172 systemd-logind[1452]: New session 8 of user core.
Dec 13 09:03:27.152444 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 09:03:27.932721 sshd[5940]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:27.937749 systemd[1]: sshd@17-5.75.230.207:22-139.178.89.65:45474.service: Deactivated successfully.
Dec 13 09:03:27.940463 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 09:03:27.943170 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Dec 13 09:03:27.944803 systemd-logind[1452]: Removed session 8.
Dec 13 09:03:33.121938 systemd[1]: Started sshd@18-5.75.230.207:22-139.178.89.65:48280.service - OpenSSH per-connection server daemon (139.178.89.65:48280).
Dec 13 09:03:34.112108 sshd[5972]: Accepted publickey for core from 139.178.89.65 port 48280 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:34.114269 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:34.127983 systemd-logind[1452]: New session 9 of user core.
Dec 13 09:03:34.136545 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 09:03:34.894950 sshd[5972]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:34.902313 systemd[1]: sshd@18-5.75.230.207:22-139.178.89.65:48280.service: Deactivated successfully.
Dec 13 09:03:34.905625 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 09:03:34.912254 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Dec 13 09:03:34.914872 systemd-logind[1452]: Removed session 9.
Dec 13 09:03:40.072065 systemd[1]: Started sshd@19-5.75.230.207:22-139.178.89.65:36250.service - OpenSSH per-connection server daemon (139.178.89.65:36250).
Dec 13 09:03:41.057856 sshd[6008]: Accepted publickey for core from 139.178.89.65 port 36250 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:41.062233 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:41.072272 systemd-logind[1452]: New session 10 of user core.
Dec 13 09:03:41.077961 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 09:03:41.834013 sshd[6008]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:41.842125 systemd[1]: sshd@19-5.75.230.207:22-139.178.89.65:36250.service: Deactivated successfully.
Dec 13 09:03:41.844753 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 09:03:41.847347 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Dec 13 09:03:41.849149 systemd-logind[1452]: Removed session 10.
Dec 13 09:03:42.007748 systemd[1]: Started sshd@20-5.75.230.207:22-139.178.89.65:36264.service - OpenSSH per-connection server daemon (139.178.89.65:36264).
Dec 13 09:03:42.990421 sshd[6022]: Accepted publickey for core from 139.178.89.65 port 36264 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:42.994445 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:43.004361 systemd-logind[1452]: New session 11 of user core.
Dec 13 09:03:43.011799 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 09:03:43.796609 sshd[6022]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:43.803380 systemd[1]: sshd@20-5.75.230.207:22-139.178.89.65:36264.service: Deactivated successfully.
Dec 13 09:03:43.807243 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 09:03:43.808489 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Dec 13 09:03:43.811907 systemd-logind[1452]: Removed session 11.
Dec 13 09:03:43.978913 systemd[1]: Started sshd@21-5.75.230.207:22-139.178.89.65:36272.service - OpenSSH per-connection server daemon (139.178.89.65:36272).
Dec 13 09:03:44.966671 sshd[6037]: Accepted publickey for core from 139.178.89.65 port 36272 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:44.967592 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:44.975819 systemd-logind[1452]: New session 12 of user core.
Dec 13 09:03:44.980824 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 09:03:45.744007 sshd[6037]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:45.749113 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Dec 13 09:03:45.749514 systemd[1]: sshd@21-5.75.230.207:22-139.178.89.65:36272.service: Deactivated successfully.
Dec 13 09:03:45.753004 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 09:03:45.757039 systemd-logind[1452]: Removed session 12.
Dec 13 09:03:50.918472 systemd[1]: Started sshd@22-5.75.230.207:22-139.178.89.65:58234.service - OpenSSH per-connection server daemon (139.178.89.65:58234).
Dec 13 09:03:51.908757 sshd[6052]: Accepted publickey for core from 139.178.89.65 port 58234 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:51.911649 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:51.920830 systemd-logind[1452]: New session 13 of user core.
Dec 13 09:03:51.924216 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 09:03:52.680727 sshd[6052]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:52.685243 systemd[1]: sshd@22-5.75.230.207:22-139.178.89.65:58234.service: Deactivated successfully.
Dec 13 09:03:52.688034 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 09:03:52.691014 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Dec 13 09:03:52.692703 systemd-logind[1452]: Removed session 13.
Dec 13 09:03:52.857812 systemd[1]: Started sshd@23-5.75.230.207:22-139.178.89.65:58246.service - OpenSSH per-connection server daemon (139.178.89.65:58246).
Dec 13 09:03:53.844447 sshd[6064]: Accepted publickey for core from 139.178.89.65 port 58246 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:53.846063 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:53.852713 systemd-logind[1452]: New session 14 of user core.
Dec 13 09:03:53.855628 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 09:03:54.746790 sshd[6064]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:54.752185 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Dec 13 09:03:54.752800 systemd[1]: sshd@23-5.75.230.207:22-139.178.89.65:58246.service: Deactivated successfully.
Dec 13 09:03:54.756220 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 09:03:54.759851 systemd-logind[1452]: Removed session 14.
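
[Annotation] Each "Accepted publickey ... / New session N of user core / Started session-N.scope" triplet above is the normal systemd-logind handshake: sshd authenticates the key, PAM registers the login with logind, and logind has systemd create a session-N.scope unit that holds the login's processes until "Removed session N". Live sessions can be enumerated and inspected with logind's own tooling, for example:

    $ loginctl list-sessions
    $ loginctl show-session 12
    $ systemctl status session-12.scope

(Session 12 is just an example number taken from the log above.)
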
Dec 13 09:03:54.925907 systemd[1]: Started sshd@24-5.75.230.207:22-139.178.89.65:58248.service - OpenSSH per-connection server daemon (139.178.89.65:58248).
Dec 13 09:03:55.907619 sshd[6096]: Accepted publickey for core from 139.178.89.65 port 58248 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:55.909830 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:55.916994 systemd-logind[1452]: New session 15 of user core.
Dec 13 09:03:55.925674 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 09:03:58.485747 sshd[6096]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:58.493119 systemd[1]: sshd@24-5.75.230.207:22-139.178.89.65:58248.service: Deactivated successfully.
Dec 13 09:03:58.493868 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Dec 13 09:03:58.501473 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 09:03:58.508115 systemd-logind[1452]: Removed session 15.
Dec 13 09:03:58.665752 systemd[1]: Started sshd@25-5.75.230.207:22-139.178.89.65:41054.service - OpenSSH per-connection server daemon (139.178.89.65:41054).
Dec 13 09:03:59.654766 sshd[6116]: Accepted publickey for core from 139.178.89.65 port 41054 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:59.655851 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:59.670067 systemd-logind[1452]: New session 16 of user core.
Dec 13 09:03:59.673667 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 09:04:00.550122 sshd[6116]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:00.555633 systemd[1]: sshd@25-5.75.230.207:22-139.178.89.65:41054.service: Deactivated successfully.
Dec 13 09:04:00.558297 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 09:04:00.559346 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Dec 13 09:04:00.560801 systemd-logind[1452]: Removed session 16.
Dec 13 09:04:00.730153 systemd[1]: Started sshd@26-5.75.230.207:22-139.178.89.65:41058.service - OpenSSH per-connection server daemon (139.178.89.65:41058).
Dec 13 09:04:01.731334 sshd[6127]: Accepted publickey for core from 139.178.89.65 port 41058 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:01.733837 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:01.740643 systemd-logind[1452]: New session 17 of user core.
Dec 13 09:04:01.750677 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 09:04:02.490991 sshd[6127]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:02.496749 systemd[1]: sshd@26-5.75.230.207:22-139.178.89.65:41058.service: Deactivated successfully.
Dec 13 09:04:02.501617 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 09:04:02.509609 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Dec 13 09:04:02.511717 systemd-logind[1452]: Removed session 17.
Dec 13 09:04:07.666777 systemd[1]: Started sshd@27-5.75.230.207:22-139.178.89.65:41066.service - OpenSSH per-connection server daemon (139.178.89.65:41066).
Dec 13 09:04:08.651695 sshd[6178]: Accepted publickey for core from 139.178.89.65 port 41066 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:08.657342 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:08.666298 systemd-logind[1452]: New session 18 of user core.
Dec 13 09:04:08.676016 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 09:04:09.439468 sshd[6178]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:09.447581 systemd[1]: sshd@27-5.75.230.207:22-139.178.89.65:41066.service: Deactivated successfully.
Dec 13 09:04:09.453305 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 09:04:09.456223 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Dec 13 09:04:09.458618 systemd-logind[1452]: Removed session 18.
Dec 13 09:04:14.614112 systemd[1]: Started sshd@28-5.75.230.207:22-139.178.89.65:37008.service - OpenSSH per-connection server daemon (139.178.89.65:37008).
Dec 13 09:04:15.590503 sshd[6190]: Accepted publickey for core from 139.178.89.65 port 37008 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:15.593164 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:15.602040 systemd-logind[1452]: New session 19 of user core.
Dec 13 09:04:15.605694 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 09:04:16.365836 sshd[6190]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:16.378172 systemd[1]: sshd@28-5.75.230.207:22-139.178.89.65:37008.service: Deactivated successfully.
Dec 13 09:04:16.383297 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 09:04:16.387181 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Dec 13 09:04:16.390677 systemd-logind[1452]: Removed session 19.
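
[Annotation] For triaging a dump like this one, the repetitive record structure makes it easy to tally preauth failures per source address mechanically. A small Python sketch; the regex assumes the exact "Connection closed by authenticating user ... [preauth]" phrasing that this sshd emits:

    #!/usr/bin/env python3
    # Count sshd "[preauth]" connection closures per (source IP, user) in a
    # plain-text journal dump read from stdin. Log-triage sketch only.
    import re
    import sys
    from collections import Counter

    PREAUTH = re.compile(
        r"Connection closed by authenticating user (\S+) "
        r"(\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]"
    )

    def count_preauth(lines):
        hits = Counter()
        for line in lines:
            m = PREAUTH.search(line)
            if m:
                user, ip = m.groups()
                hits[(ip, user)] += 1
        return hits

    if __name__ == "__main__":
        for (ip, user), n in count_preauth(sys.stdin).most_common():
            print(f"{n:4d}  {ip}  user={user}")

Fed this section on stdin, it would report nine root preauth closures from 116.162.80.221 and none from 139.178.89.65, whose connections all authenticated successfully.
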